Wikipedia:Reference desk/Mathematics
(Consider the following in the context of (iii); it may provide a different way of analyzing the problem) Let <math>p(X,Y)=X+Y</math> and let <math>q(X,Y)=X-Y</math>, and consider the cosets <math>r=p(X,Y)+I</math> and <math>s=q(X,Y)+I</math> in <math>F[X,Y]/(X^2-Y^2)</math>. Note that <math>rs=0</math>, <math>\frac{r+s}{2}=X</math>, and <math>\frac{r-s}{2}=Y</math>. Consider the ring <math>F[X,Y]/(XY)</math>. [[User:Point-set topologist|<font color="#000000">PS</font>]][[User talk:Point-set topologist|<font color="#000000">T</font>]] 11:08, 17 February 2010 (UTC)
:In other words, use the ring version of the [[Chinese remainder theorem]].—[[User:EmilJ|Emil]] [[User talk:EmilJ|J.]] 12:26, 17 February 2010 (UTC)
::It's not really the CRT, which only applies if the two ideals are coprime. Something slightly different is going on, though: what you get, as PST observes, is a fibre product (pullback). This example should be an instance of a more general result about fibre products, something like: let R be a ring with a maximal ideal m such that R/m = F, and let I, J be ideals such that I + J = m; then R/IJ is isomorphic to the fibre product of R/I and R/J. [[Special:Contributions/129.67.37.143|129.67.37.143]] ([[User talk:129.67.37.143|talk]]) 21:16, 17 February 2010 (UTC)
Revision as of 21:16, 17 February 2010
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
February 11
Name that series
Does the series have a name, as the harmonic series or the power series do? 58.147.58.179 (talk) 04:42, 11 February 2010 (UTC)
- I think it's a power series; set x = 1/2 and take the a's to be the natural numbers. Money is tight (talk) 08:40, 11 February 2010 (UTC)
- So it is. I linked to the power series, but didn't look at that page because I was erroneously equating them with geometric series, when in fact geometric series are just one example of power series. Thanks. 58.147.58.179 (talk) 10:29, 11 February 2010 (UTC)
- And the formula will work if you replace the number 2 on either side of the equation with any complex number ω, provided that the modulus |ω| < 1. You can prove it by finding the radius of convergence of the given power series. •• Fly by Night (talk) 17:28, 11 February 2010 (UTC)
- You seem to be a bit confused. The power series is the Taylor series of about 0, and has radius of convergence 1. Thus, for any z with |z|<1, we have . The OP's series is the z=1/2 case of this. Algebraist 17:41, 11 February 2010 (UTC)
- You're exactly right. I have your same exact expression saved in an earlier Matlab file. Not sure why I wrote what I did. Quite embarrassing really. Thank you very much for pointing out that error. •• Fly by Night (talk) 01:35, 12 February 2010 (UTC)
linear programming
I've been using linear programming to find solutions for minimizing the cost of making several products, which is a common application. I have noticed, however, that I cannot make just half the resulting number of products for half the minimum cost. How is this phenomenon explained? 71.100.8.16 (talk) 11:06, 11 February 2010 (UTC)
- Fixed costs? Or perhaps you're using some kind of batch production that is less efficient for smaller batches? Perhaps there are minimum order quantities for your materials? It's hard to say unless you give more information. —Bkell (talk) 15:05, 11 February 2010 (UTC)
- It could be Economies of scope, though that doesn't happen only with the linear case. Pallida Mors 19:04, 11 February 2010 (UTC)
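Not knowing the actual model, here is a minimal sketch of the kind of effect described above, assuming SciPy is available and using made-up numbers: a constraint that does not scale with the production target (here a hypothetical minimum-order quantity) makes the optimal cost non-proportional, so half the output costs more than half the original minimum.
<syntaxhighlight lang="python">
# Hypothetical two-ingredient cost model: ingredient 1 costs 3 per unit, ingredient 2
# costs 2 per unit, the total must meet the demand, and at least 10 units of
# ingredient 1 must be bought regardless of demand (a minimum order that does not scale).
from scipy.optimize import linprog

def min_cost(demand):
    c = [3.0, 2.0]                        # unit costs of the two ingredients
    A_ub = [[-1.0, -1.0]]                 # -(x1 + x2) <= -demand, i.e. x1 + x2 >= demand
    b_ub = [-demand]
    bounds = [(10.0, None), (0.0, None)]  # x1 >= 10 (minimum order), x2 >= 0
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun

full = min_cost(100)   # 10*3 + 90*2 = 210
half = min_cost(50)    # 10*3 + 40*2 = 110, which is more than 210/2 = 105
print(full, half, full / 2)
</syntaxhighlight>
Fixed costs or minimum batch sizes (which need integer variables) produce the same effect, only more strongly.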
Symmetry groups
Given a group G, is it possible to construct a space for which G is the symmetry group? I thought at first that this was clearly untrue, since symmetry groups typically have functions as members, and so what does it mean to say that, say, Z mod 5 is the symmetry group of a space? But then I remembered that one of the goals of representation theory is to represent groups as matrix groups, so it didn't seem entirely unlikely. Is there a standard method of constructing the space, if true, and if untrue, does it work for some restricted class of groups we might consider?
The thought occurred to me today during a topology lecture, while discussing the construction of Eilenberg-MacLane spaces, and I wasn't sure where to even begin looking for an answer! (The reference desk seemed a good start...) Thanks for the help, Icthyos (talk) 17:33, 11 February 2010 (UTC)
- It's sufficient to say that G is isomorphic to the "real" symmetry group of the space; depending on how generally you mean to construe "space", the obvious example for that case is just 5 separate lines (or points, or planes, or...) with the group elements shuttling points cyclically among them. If you admit such examples, I'm sure any group can be used. --Tardis (talk) 18:23, 11 February 2010 (UTC)
- Interpreting "space" very broadly, Cayley's theorem is the canonical result here. Algebraist 18:25, 11 February 2010 (UTC)
It is a theorem of Johannes De Groot that every group is the group of homeomorphisms of a compact Hausdorff space. As Algebraist hints, the idea of De Groot's proof is to use Cayley graphs, but then to replace the edges of the graphs by asymmetric subspaces. —David Eppstein (talk) 02:35, 12 February 2010 (UTC)
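To make the Cayley's theorem remark concrete, here is a small sketch (not De Groot's construction, just the left-regular representation) showing how Z mod 5 can be realised as a group of permutations, i.e. as symmetries of a 5-element set:
<syntaxhighlight lang="python">
# Cayley's theorem for Z/5: each element g acts on {0,...,4} by x -> (g + x) mod 5,
# and the map g -> permutation is an injective homomorphism into the symmetric group S_5.
n = 5

def translation(g):
    """The permutation of {0,...,n-1} given by adding g."""
    return tuple((g + x) % n for x in range(n))

def compose(p, q):
    """Composition of permutations written as tuples: (p o q)(x) = p[q[x]]."""
    return tuple(p[q[x]] for x in range(n))

perms = {g: translation(g) for g in range(n)}

# Homomorphism: the permutation of g+h is the composition of the permutations of g and h.
assert all(perms[(g + h) % n] == compose(perms[g], perms[h])
           for g in range(n) for h in range(n))
# Injectivity: distinct group elements give distinct permutations.
assert len(set(perms.values())) == n
print(perms)
</syntaxhighlight>
Turning such a permutation action into a topological space whose full homeomorphism group is exactly the given group is the harder step that De Groot's asymmetric replacements for the Cayley graph edges take care of.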
- David Eppstein, are there more topological spaces than groups? If we map each space to its automorphism group, the theorem of Johannes De Groot shows this is a surjection. Pretty weird; I'm sure we can define for each group some space and then show this is a surjection too. Money is tight (talk) 08:53, 12 February 2010 (UTC)
- What do you mean by "more"? Both topological spaces and groups form a proper class. —David Eppstein (talk) 04:35, 13 February 2010 (UTC)
- I know they form proper classes, but that only means we can't talk about them in ZFC. But we can use NBG, where proper classes exist in the object language. The article Axiom_of_limitation_of_size gives an example of where we can talk about class functions; i.e. the function itself is a proper class of ordered pairs. Money is tight (talk) 06:25, 13 February 2010 (UTC)
- That very axiom tells you that all proper classes have the same cardinality. In this specific example, I think that you don't even need the axiom to prove that both classes are equinumerous with V.—Emil J. 13:38, 15 February 2010 (UTC)
Thanks for the replies, this has given me much to ponder. (Woops, wasn't signed in) Icthyos (talk) 15:28, 14 February 2010 (UTC)
Summing a series
I found by experimentation that the sum of the series 1/2! + 2/3! + ... + n/(n+1)! appeared to be 1 - 1/(n+1)!, then proved this by induction. Can the sum be derived in any other way?—86.166.204.98 (talk) 20:14, 11 February 2010 (UTC)
- . See also telescoping series. -- Meni Rosenfeld (talk) 21:21, 11 February 2010 (UTC)
(OP)Thanks. I saw how to do it soon after posting by taking the term from RHS to LHS, whereby everything on the left collapsed to 1.—86.160.104.79 (talk) 13:44, 13 February 2010 (UTC)
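A quick numerical check of the identity and the telescoping step k/(k+1)! = 1/k! - 1/(k+1)! (just an illustration, using the standard library):
<syntaxhighlight lang="python">
from math import factorial

def lhs(n):
    # 1/2! + 2/3! + ... + n/(n+1)!
    return sum(k / factorial(k + 1) for k in range(1, n + 1))

def rhs(n):
    # 1 - 1/(n+1)!
    return 1 - 1 / factorial(n + 1)

for n in (1, 2, 5, 10):
    assert abs(lhs(n) - rhs(n)) < 1e-12
    print(n, lhs(n), rhs(n))
</syntaxhighlight>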
Variability of taylor expansions in the complex plane under different branch-cuts
Hi all,
I've just completed the first part of the following problem:
Find the first two non-vanishing coefficients in the Taylor expansion about the origin of the following functions, assuming principal branches for (i), (ii) and (iii), making use of (where appropriate) series expansions for log(1+z), etc.
i)
ii)
iii)
iv)
Now I just figured they can't ask this question unless the functions are analytic about the origin, in which case the derivative should be the same in all directions at the origin, namely that along the real line, so then I just calculated the Taylor series as if they were real functions, getting:
i)
ii)
iii)
iv)
However, I didn't use any of the standard series expansions for these calculations, did I make things overly complicated for myself? Anyway, my problem is, I fear I've oversimplified things, because the next part of the question asks how each answer would differ if we didn't take the principal branch in each case? I also need to find the range of values for z for which each series converges. What have I done wrong, because I can't see how my answers would change if we took a different branch cut - are the answers even correct? Thanks very much for any explanation - and also, what would be the most sensible approach for finding the range of convergence for each series?
Many thanks in advance! 131.111.185.68 (talk) 20:20, 11 February 2010 (UTC)
- The answers are correct. To hint at where the principal branch comes into play, consider the fact that if a non-principal branch was used, the answers to 1 and 2 could be and . -- Meni Rosenfeld (talk) 21:28, 11 February 2010 (UTC)
euromillions lotto rollover
The prize fund is £113 million, the cost of a ticket is £1.50, and the odds of winning are 76 million to one. Is it worth purchasing a ticket? —Preceding unsigned comment added by 89.241.185.253 (talk) 21:34, 11 February 2010 (UTC)
- Yes, think of all those important government projects you'll be funding !
- Mathematically, probably not. Although it sounds like a good bet, each time a big lottery gets into such a position it becomes very popular, and a lot of people place bets. So you're not betting on getting £113 million, you're betting on getting a fraction of that. And the bigger the pot, the more chance it will be split (and the more ways it will likely be split), as more people bet. Every once in a while someone will collect a big pot, but more often it will be won by more than one ticket, I think.--JohnBlackburnewordsdeeds 22:19, 11 February 2010 (UTC)
- People win the lottery without purchasing a ticket. Therefore, the odds of winning with purchasing a ticket and without purchasing a ticket are nearly identical. -- kainaw™ 22:23, 11 February 2010 (UTC)
- Just checked for a recent example and these siblings just won $100,000 this week without purchasing a lottery ticket. It was given to them as a birthday present. -- kainaw™ 22:31, 11 February 2010 (UTC)
- The odds of winning the jackpot aren't useful, since that doesn't take into account the jackpot being split between multiple people or the smaller prizes. You need to know the total prize fund (the number you've quoted is actually the jackpot) and the total number of tickets bought in order to work out the expectation. A single rollover (which I think this is) is very unlikely to have a positive expectation. Multiple rollovers can, in theory. --Tango (talk) 22:52, 11 February 2010 (UTC)
- The concept of the Kelly criterion is that you can compute the optimal number of tickets to buy based on the size of your current bankroll and the winning probability and expectation. E.g., if the expected value is positive, then it makes sense to buy more than zero tickets, but since the probability of actually winning is small (the positive expectation comes from the size of the payout if you do win), you should not convert all your savings into lottery tickets. For almost everybody, the optimal number of tickets is much less than 1 ticket (i.e. running the numbers shows you should only bet a fraction of a cent rather than £1.50), and since that fractional bet is not possible, it's better not to bet at all. Buying a lottery ticket turns out to be a good investment (per Kelly) only if you are already a billionaire. 66.127.55.192 (talk) 23:19, 11 February 2010 (UTC)
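For illustration only, a rough sketch of that Kelly calculation with made-up numbers (a 1-in-76-million chance of winning the whole jackpot on a £1.50 stake, ignoring shared jackpots and smaller prizes; the payouts below are hypothetical):
<syntaxhighlight lang="python">
# Kelly fraction for a simple win/lose bet: f* = p - q/b, where p is the win
# probability, q = 1 - p, and b is the net payout per unit staked.
p = 1 / 76e6          # chance of hitting the jackpot with one ticket
stake = 1.50

def kelly_fraction(payout):
    b = (payout - stake) / stake      # net odds received on the stake
    return p - (1 - p) / b

print(kelly_fraction(113e6))   # negative: the expectation is below the stake, so bet nothing
print(kelly_fraction(200e6))   # hypothetical richer payout: about 5.7e-9 of your bankroll

bankroll = 20_000
print(bankroll * kelly_fraction(200e6))  # about 0.0001 pounds: far less than one ticket
</syntaxhighlight>
Even in the hypothetical positive-expectation case the recommended stake is a tiny fraction of one ticket, which is the point made above.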
- A bit of math. From the news, the jackpot was £85m a week ago. No-one won, but the jackpot has gone up by £30m. Only about 32% of the prize money goes to the jackpot, so roughly £100m was added to the prize fund as a whole. And as only 50% of ticket sales go to prizes, about £200m was spent on tickets over the last week.
- At £1.50 a ticket, that's maybe 130m tickets bought to date. There's still a day to go, so probably a few tens of millions, if not more, will be sold for this draw. With odds of 1 in 76 million, the expected number of winners is probably more than two, and the chance of there being only one winner is maybe less than 50%. And if there's only one winner, the chance that it's you is not 1 in 76 million but 1 in however many tickets are sold: maybe 1 in 150 million. Combined with that less-than-50% chance, that's roughly a 1 in 300 million chance of scooping the whole jackpot.
- I.e. the chance of winning a share of the jackpot is still 1 in 76 million; it's just very likely that it will be shared. Based on the above back-of-the-envelope calculations it still seems a very poor bet (except for the lottery company).--JohnBlackburnewordsdeeds 23:23, 11 February 2010 (UTC)
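A small sketch of the Poisson estimate behind those back-of-the-envelope numbers, assuming roughly 150 million independently chosen tickets (the guess above) and the quoted 1-in-76-million odds:
<syntaxhighlight lang="python">
from math import exp, factorial

p = 1 / 76e6          # chance that a given ticket hits the jackpot combination
tickets = 150e6       # rough guess at the number of tickets sold
lam = p * tickets     # expected number of jackpot winners, about 2

def poisson(k):
    return exp(-lam) * lam**k / factorial(k)

print("expected winners :", lam)
print("P(no winner)     :", poisson(0))
print("P(exactly one)   :", poisson(1))
print("P(two or more)   :", 1 - poisson(0) - poisson(1))
</syntaxhighlight>
With these guesses a shared jackpot is more likely than not, which matches the estimate above.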
- And don't forget about the taxes on the winnings. Rckrone (talk) 17:06, 12 February 2010 (UTC)
- There aren't any. --Tango (talk) 23:43, 12 February 2010 (UTC)
- And it turns out there were two winners, matching my rough calculations above (though the chance of it being exactly two was I think less than half). One winner was in the UK and won £56 million: at £1.50 a ticket and odds of 76 million to one a poor bet.--JohnBlackburnewordsdeeds 23:49, 12 February 2010 (UTC)
- The Kelly Criterion is irrelevant - the expectation for a lottery is almost always negative. The Kelly Criterion tells you how high the expectation needs to be for it to be worth you buying a ticket, but we know that will always be a positive number. The only way to have a positive expectation on a lottery (excluding multiple rollovers) is to be the one running the lottery. --Tango (talk) 13:43, 13 February 2010 (UTC)
February 12
Definition of Fundamental Domain from MathWorld--correct?
"Let be a group and be a topological G-set. Then a closed subset of is called a fundamental domain of in if is the union of conjugates of , i.e., and the intersection of any two conjugates has no interior. For example, a fundamental domain of the group of rotations by multiples of in is the upper half-plane and a fundamental domain of rotations by multiples of is the first quadrant . The concept of a fundamental domain is a generalization of a minimal group block, since while the intersection of fundamental domains has empty interior, the intersection of minimal blocks is the empty set."
- I hope this is right, since the last sentence is due to me. I would say if it is right, that would be nice, because it is concise and seems easy to understand and use. The current wikipedia article, like many wikipedia math articles (such as how affine space USED to be), goes on and on without giving an immediate, precise definition. Maybe that's necessary in the nature of these things and the clear, concise definition on MathWorld is wrong? I am genuinely concerned that I have contributed a plausible, user-friendly but wrong explanation to MathWorld. Rich (talk) 03:05, 10 February 2010 (UTC) Pasted to Question Desk from Fundamental Domain(Talk) just now. Rich (talk) 00:14, 12 February 2010 (UTC)
Obviously you pasted the above and some symbols got omitted. Please try again. Michael Hardy (talk) 01:40, 12 February 2010 (UTC)
- OK, I wondered which term was to be defined, and I was annoyed that your posting didn't tell us. But after getting a bit into it, I guessed that it's "fundamental domain". So I found that on MathWorld. I'm wondering about the word "conjugate". The way that word is used when speaking of conjugates of members of a group would make me expect to see gFg−1 or the like instead of gF. But I'm not an expert in this area. Michael Hardy (talk) 01:47, 12 February 2010 (UTC)
- Here's an improved transcription from Mathworld:
"Fundamental Domain
Let G be a group and S be a topological G-set. Then a closed subset F of S is called a fundamental domain of G in S if S is the union of conjugates of F , i.e., S=Union over all g in G of all gF, and the intersection of any two conjugates has no interior. For example, a fundamental domain of the group of rotations by multiples of 180 degrees in R^2 is the upper half-plane and a fundamental domain of rotations by multiples of 90 degrees is the first quadrant. The concept of a fundamental domain is a generalization of a minimal group block, since while the intersection of fundamental domains has empty interior, the intersection of minimal blocks is the empty set." (Transcribed from Eric Weisstein's MathWorld)Rich (talk) 23:24, 14 February 2010 (UTC)
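A small numerical illustration of the 90-degree example in that definition (a sketch only; F is the closed first quadrant and the group is the four rotations by multiples of 90 degrees, as in the quoted text): every sampled point lies in at least one rotate gF, and any point lying in two different rotates sits on an axis, so the pairwise intersections have empty interior.
<syntaxhighlight lang="python">
import random

def rotate(point, k):
    """Rotate (x, y) by k*90 degrees about the origin."""
    x, y = point
    for _ in range(k % 4):
        x, y = -y, x
    return (x, y)

def in_F(point):
    """Closed first quadrant F = {(x, y): x >= 0 and y >= 0}."""
    x, y = point
    return x >= 0 and y >= 0

random.seed(0)
pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(10000)]
pts += [(3.0, 0.0), (0.0, -2.0), (0.0, 0.0)]       # boundary points shared by several rotates
for p in pts:
    # p lies in the rotate gF exactly when rotating p back by g lands it in F.
    hits = [k for k in range(4) if in_F(rotate(p, -k))]
    assert hits                                    # the four rotates of F cover the plane
    if len(hits) > 1:                              # overlaps happen only on the axes,
        assert p[0] == 0 or p[1] == 0              # a set with empty interior
</syntaxhighlight>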
- The standard definition of a group action requires that the group act on a set from either the right or the left, so it is meaningful to interpret gFg−1 as gg−1F which is F for all g. Thus, it would not be very interesting to study the conjugates of F in that sense, and hence gF is referred to as the conjugate of F instead. (This seems the best explanation of the terminology "conjugation" in the given context; perhaps I am missing something, but I cannot think of another motivation for the terminology) Of course, there are other meaningful ways to merge conjugation with the theory of group actions; for instance, we can allow G to act on itself via conjugation.
- With regards to the question, I am fairly certain that the last sentence is correct. In the most basic sense, for "A" to be a "generalization" of "B", every instance of B should be an instance of A. If we equip the set on which a group acts with the discrete topology (in fact, any topology finer than the smallest topology under which all minimal group blocks are closed will do), the intersection of the minimal group blocks is the empty set, which of course has empty interior. The concept of a fundamental domain generalizes that of a minimal group block because the requirement on the intersection is less stringent; if the intersection were empty the requirement would be satisfied, but it would also be satisfied even if the intersection is more general (namely, has empty interior). Hope this helps. PST 03:45, 12 February 2010 (UTC)
- A group block does not need to be closed in a topological sense so there are blocks that aren't fundamental domains. On the other hand the examples given in the MathWorld article are fundamental domains without being group blocks. Even if you assume a finite set with the discrete topology the concepts are still different. With a fundamental domain F=Fg implies g=1 but that's not the case with a block.--RDBury (talk) 23:37, 12 February 2010 (UTC)
- Which minimal group blocks fail to be closed? Thanks, Rich (talk) 01:20, 13 February 2010 (UTC)
Let T be the smallest topology relative to which all the minimal group blocks in the G-set are closed. If we equip the G-set with certain topologies finer than T (such as T itself, or even the discrete topology), it can be ensured that all minimal group blocks are closed and that the G-set is in fact a topological G-set. Of course, fundamental domains are not necessarily minimal group blocks as RDBury points out. But fundamental domains are generalizations of minimal group blocks for transitive G-actions. Otherwise, they are not. For if we let G act trivially on a set S, the singleton sets in S are minimal group blocks but the orbit of any single point in S is trivial and thus no minimal group block can be a fundamental domain (no matter what topology we equip on the G-set).
But to clarify once more, it is indeed the case that, depending on the topology, all minimal group blocks are closed. Although the set on which the group acts initially does not have a topology, you can equip the set with a topology such that all minimal group blocks are closed (and of course such that the G-set is a topological G-set). (I made a silly error in my last post; unless we restrict our attention to transitive group actions, it is not necessarily the case that fundamental domains are generalizations of minimal group blocks). PST 05:10, 13 February 2010 (UTC)
- Thanks Michael, RDBury, and PST for your helpful thoughts.98.207.84.24 (talk) 04:44, 14 February 2010 (UTC)
Integration of square root in the complex plane
Hi,
Just a quick one: I want to find on the principal branch of the square root, where gamma is the circle |z|=1 and then the circle |z-1|=1, both of radius 1. I've tried to parametrize in the obvious way, , respectively, and then I accidentally integrated for theta between 0 and 2pi - I think, retrospectively, this is wrong, since I got solutions of -4/3 and 0: I have a feeling I should have gone from -pi to pi, but my work has been marked with my first answer correct and my second one wrong: if going from -pi to pi however, I wouldn't get -4/3 for the first integral, would I? So has my work just been marked correct wrongly, if that makes any sense?
In general, when integrating something with a branch cut, do we just choose our parameter range so that it 'goes around' the branch cut, e.g. -pi to pi doesn't cross the negative real axis, rather than 0 to 2pi which does? Thanks in advance, I don't need much detail in your answers, just yes/nos would be fine unless I've misunderstood something! 82.6.96.22 (talk) 05:47, 12 February 2010 (UTC)
- You can choose the parameter range however you want, but be prepared for the possibility that the formula for the integrand might not be simple. For you have , while for , which goes around the branch cut, you can simply use .
- I think the answer to the second question is indeed 0, and for the first it is actually . -- Meni Rosenfeld (talk) 08:35, 12 February 2010 (UTC)
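A numerical sanity check is easy here (a sketch assuming NumPy, whose complex square root is the principal branch with the cut along the negative real axis): discretise z(theta) on each circle and sum sqrt(z) z'(theta) dtheta.
<syntaxhighlight lang="python">
import numpy as np

def contour_integral(z, dz, n=200000):
    """Approximate the integral of sqrt(z) dz over theta in (-pi, pi) by the midpoint rule."""
    theta = np.linspace(-np.pi, np.pi, n, endpoint=False) + np.pi / n
    return np.sum(np.sqrt(z(theta)) * dz(theta)) * (2 * np.pi / n)

# |z| = 1:  z = exp(i theta), dz = i exp(i theta) d theta
I1 = contour_integral(lambda t: np.exp(1j * t), lambda t: 1j * np.exp(1j * t))
# |z - 1| = 1:  z = 1 + exp(i theta), dz = i exp(i theta) d theta
I2 = contour_integral(lambda t: 1 + np.exp(1j * t), lambda t: 1j * np.exp(1j * t))

print(I1)  # close to -4i/3, i.e. about -1.3333j
print(I2)  # close to 0
</syntaxhighlight>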
Stats resources/texts
Please recommend a stat resource/text that teaches you all the different manipulations involving expectation and variance. The stat text by Devore doesn't have a comprehensive intro to these things. E.g. V(constant*random variable) = constant^2 * V(random variable), and many other manipulations involving the summation sign and more than one random variable and constants. —Preceding unsigned comment added by 142.58.129.94 (talk) 19:58, 12 February 2010 (UTC)
- This is mostly about probability rather than statistics, but it's online and covers the topics you mention. 66.127.55.192 (talk) 08:50, 13 February 2010 (UTC)
Integer closure under addition
I am currently somewhat confused about this. One article says that the integers are closed under addition and subtraction, whereas another one seems to give an example where they are not. What am I missing? Thanks in advance. It Is Me Here t / c 21:22, 12 February 2010 (UTC)
- The integers are closed under finite sums; the second link refers to an infinite sum (not a convergent one either; its sum is 1/2 by a nonstandard definition). Here's another example of an infinite sum of integers whose value is not an integer:
- 1+1+1+1+1+1+1+1+1+1+...
- The sum of that sequence is either 'undefined' or infinity, depending on context (it's not even a real number!). My point in bringing that up is that it's easy to see that integers aren't closed under infinite sums.
- HTH, --COVIZAPIBETEFOKY (talk) 21:31, 12 February 2010 (UTC)
Closure under addition merely means that the sum of two integers is an integer. Addition is thought of in that context as a binary operation.
A consequence is that the sum of any finite number of integers is an integer. Michael Hardy (talk) 23:31, 12 February 2010 (UTC)
- An example where subtraction isn't closed is for the non-negative integers 0, 1, 2, 3 etc. With those 3−2 gives 1 but 2−3 doesn't have a value so subtraction isn't closed. In topology you'll find an interesting combination, an infinite union of open sets gives an open set but only a finite number of intersections is guaranteed to give an open set. Dmcq (talk) 10:24, 13 February 2010 (UTC)
- Thanks, all. It Is Me Here t / c 11:33, 13 February 2010 (UTC)
"subtraction isn't closed is for the non-negative integers" is a clumsy phrasing. "The set of all non-negative integers isn't closed under subtraction", or just "The non-negative integers are not closed under subtraction" is standard language. Michael Hardy (talk) 14:11, 14 February 2010 (UTC)
0.18, 9.45, and 0.38 estimation
Is Pluto's diameter, 0.18, closer to 1/5 the size of earth or 1/6 the size of earth? Is it conventional to say Saturn is 9 times larger than earth or 10 times larger than earth? For Mercury, is it conventional to say 1/3 the size of earth or 2.5 times smaller than earth?--69.233.255.251 (talk) 21:15, 12 February 2010 (UTC)
- If you want to know whether 0.18 is closer to 1/5 or a 1/6, try a calculator. --Tango (talk) 23:49, 12 February 2010 (UTC)
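Or, spelling the calculator step out (a two-line check):
<syntaxhighlight lang="python">
print(abs(0.18 - 1/5))   # about 0.02
print(abs(0.18 - 1/6))   # about 0.0133, so 0.18 is closer to 1/6
</syntaxhighlight>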
- Also keep in mind that "one time larger" means "twice as large," so "9 times larger" means "10 times as large," and "2.5 times smaller" doesn't make sense. —Bkell (talk) 00:49, 13 February 2010 (UTC)
- That doesn't seem correct to me. I'm pretty sure in normal English "9 times larger" means the same thing as "9 times as large". 75.142.246.117 (talk) 04:19, 13 February 2010 (UTC)
- After some googling it seems like the usage is mixed. I think I'm technically wrong about the meaning, but "times larger" is going to cause confusion. 75.142.246.117 (talk) 04:29, 13 February 2010 (UTC)
- Yes, it's ambiguous. "900% larger" definitely means "10 times as large", but "9 times larger" is unclear and should be avoided. --Tango (talk) 13:46, 13 February 2010 (UTC)
- Those who care about precision of language could say "larger by a factor of 9" to avoid the confusion if they insisted on using "larger". Colloquially, many imprecise people use "9 times larger" when they mean "9 times as large", and some of them can't even see the difference! The phrase "2.5 times smaller" doesn't make any logical sense, but it is used surprisingly often to mean "smaller by a factor of 2.5". Dbfirs 22:37, 13 February 2010 (UTC)
February 13
linear programming
When setting up constraints when using the simplex method, can you use two inequalities for the same variable, such as A >= B and A <= C, in order to achieve an overall constraint that keeps the value between B and C? 71.100.8.16 (talk) 00:22, 13 February 2010 (UTC)
- If B and C are also variables you'd set it up as B-A<=0, A-C<=0. Or introducing slack variables, B-A+X=0, A-C+Y=0.--RDBury (talk) 06:19, 13 February 2010 (UTC)
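A tiny sketch of that formulation in matrix form, assuming SciPy and, purely for illustration, taking B and C to be the constants 2 and 5: the two inequality rows together keep A between B and C.
<syntaxhighlight lang="python">
from scipy.optimize import linprog

# The variable vector is just (A,).  B <= A <= C is expressed as two <= rows:
#   -A <= -B   (i.e. B - A <= 0)
#    A <=  C   (i.e. A - C <= 0)
B, C = 2.0, 5.0
A_ub = [[-1.0], [1.0]]
b_ub = [-B, C]

res = linprog(c=[1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)])
print(res.x)   # minimising A gives A = B = 2; with c = [-1.0] you would get A = C = 5
</syntaxhighlight>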
Lottery
Sorry if this is a stupid question, I'm not good at statistics:
Situation A:
1000 people each buy a lottery ticket for draw # 1
Situation B:
100 people each buy a lottery ticket for draw # 1; 100 other people each buy a lottery ticket for draw # 2; 100 other people each buy a lottery ticket for draw # 3; and so on for a total of 10 draws.
Is the probability that someone will win a prize equal in both situations? Poolofwater (talk) 15:54, 13 February 2010 (UTC)
- That's going to depend on the rules of the lottery. How are winners selected and what prizes are there? --Tango (talk) 15:55, 13 February 2010 (UTC)
- If the tickets are chosen at random, yes. E.g. in theory someone in each of the ten draws could win, so ten prizes are won. But also in theory the people in situation A could buy ten tickets with the same number, and that number could win. The chance might be vanishingly small in each case though.
- If it's one person buying all the tickets, then one way to increase your chance of winning a prize is to buy tickets with different numbers. E.g. if each draw has only 1000 numbers you can buy all 1000 in situation A and so guarantee a prize. In situation B you could still lose each draw. Of course real lotteries tend to have much longer odds than 1 in 1000, so even if you bought your tickets at random they would likely all be different, so the odds would be almost the same. And this is just your chance of winning something. Your expected winnings are the same whether you choose tickets at random or not.--JohnBlackburnewordsdeeds 15:57, 13 February 2010 (UTC)
- Thanks very much! Poolofwater (talk) 16:05, 13 February 2010 (UTC)
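For the case where tickets are chosen at random, a quick Monte Carlo sketch with made-up parameters (a 1-in-1000 lottery, as in the example above) shows the two situations give essentially the same chance that somebody wins:
<syntaxhighlight lang="python">
import random

random.seed(1)
N = 1000          # numbers in the lottery, as in the example above
trials = 10000

def someone_wins(groups, tickets_per_group):
    for _ in range(groups):                    # one independent draw per group
        draw = random.randrange(N)
        if any(random.randrange(N) == draw for _ in range(tickets_per_group)):
            return True
    return False

pA = sum(someone_wins(1, 1000) for _ in range(trials)) / trials   # situation A
pB = sum(someone_wins(10, 100) for _ in range(trials)) / trials   # situation B
print(pA, pB, 1 - (1 - 1/N)**1000)   # all about 0.632
</syntaxhighlight>
If instead each group deliberately buys distinct numbers, situation A can cover all 1000 numbers and guarantee a win while situation B cannot, which is the distinction drawn above.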
February 14
Integration
Can anyone tell me what ∫x dx({} refers to the "fractional part" function) will be in the limit of 2 to 20. Actually, it'll be better if you gave me the reasons rather than the actual steps. I mean, from 2 to 20, there are an awful number of numbers having non-zero fractional parts (apart from the integers, obviously). So how on earth can anyone integrate it? It can't be integrated piece-wise either. I tried to. Can anyone help?117.194.225.72 (talk) 12:10, 14 February 2010 (UTC)
- Did you mean to put the curly brackets round x in the integration? Not that I recognize that notation anyway. It sounds like you are simply supposed to integrate the sawtooth wave function. Dmcq (talk) 12:32, 14 February 2010 (UTC)
- (edit conflict) You want to integrate the fractional part of x from x = 2 to x = 20 ? Remember that ∫ f(x) dx from a to b is just the (signed) area enclosed between the graph of f(x) and the x-axis between x = a and x = b. This is quite straightforward to determine geometrically when f(x) is the fractional part of x. Note that this is the same as integrating piece-wise - between n and n+1 the fractional part of x is x-n, so you can integrate these pieces and then add them together. Gandalf61 (talk) 12:39, 14 February 2010 (UTC)
The fractional-part function is periodic, so if you integrate it from 0 to 1, you get the same thing as if you integrate it from 2 to 3, or from 3 to 4, or from 4 to 5, etc. There are 18 such intervals between 2 and 20, so the integral from 2 to 20 is 18 times that from 0 to 1. For x between 0 and 1, the fractional part of x is just x itself. So
<math>\int_2^{20} \{x\}\,dx = 18\int_0^1 x\,dx = 18\cdot\tfrac{1}{2} = 9.</math>
Michael Hardy (talk) 14:09, 14 February 2010 (UTC)
What about the fractional part of numbers between 0 and 1 (having non-zero fractional part)? 0.556, for example? They should be taken into account too, right? In that case, integrating it should give an answer other than the value of x itself.... Correct me if I'm wrong. 117.194.231.5 (talk) 15:46, 14 February 2010 (UTC)
- If x = 0.556, then the fractional part of x is also 0.556, i.e. the fractional part of x is x itself, just as stated above. Michael Hardy (talk) 22:40, 14 February 2010 (UTC)
- Consider what the difference is between the fractional part of x and x itself between 0 and 1. Also, you should revisit your original impression that it can't be integrated piece-wise. Why did you say that? Finally, the suggestion above to consider its graph is a great idea. Have you sketched out a quick graph of the function and shaded in the area of integration yet? 58.147.58.28 (talk) 16:06, 14 February 2010 (UTC)
Thank you everyone. I just realised where I went wrong. I'm new to this stuff, so, while I knew that ∫f(x)dx is actually the area bounded by the function's graph and the x-axis, while trying to solve the above problem, I was constantly thinking of the summation of all the values of {x}. That would have made a different problem, something that went like: 0+0.001+0.002+0.003 and so on.. Thanks a bunch. Sketching the saw-tooth graph helped a lot. As simple as calculating the area of a triangle and multiplying it by 18, right? 117.194.227.81 (talk) 05:32, 15 February 2010 (UTC)
- Correct. Michael Hardy (talk) 06:05, 15 February 2010 (UTC)
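A quick numerical check of that area argument (a midpoint Riemann sum, assuming NumPy):
<syntaxhighlight lang="python">
import numpy as np

a, b, n = 2.0, 20.0, 1_000_000
x = a + (b - a) * (np.arange(n) + 0.5) / n   # midpoints of n subintervals of [2, 20]
frac = x - np.floor(x)                       # fractional part {x}
print(frac.mean() * (b - a))                 # close to 9 = 18 * (1/2), i.e. 18 unit triangles
</syntaxhighlight>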
Sum of all combinations of n
What is the equation for the sum of all unique combinations of n objects. Eg, if n=2 (let's call the objects, A and B), the sum is 3 (combinations are A, B, and AB); for n=3, sum is 7 (combinations are A, B, C, AB, AC, BC, ABC). I know this seems similar to triangular numbers which is the sum of unique interactions between 2 objects among n numbers. Ephilei (talk) 19:06, 14 February 2010 (UTC)
- Never mind. I found it. It's 2^n-1, one less than the size of the power set. Ephilei (talk) 19:23, 14 February 2010 (UTC)
- Actually if you consider the empty subset too, they are exactly 2^n. --84.221.209.11 (talk) 20:47, 14 February 2010 (UTC)
- It is not very hard to prove either. We need to know about the binomial theorem and the binomial coefficients. Let us say that you have n objects. How many ways to pick none? There are nC0 (read as n choose none) ways of doing that. How many ways to choose one? There are nC1 (read as n choose one) ways of doing that. How many ways to choose two? There are nC2 ways of doing that. ... How many ways to choose n? There are nCn ways of doing that. So you need to make the following addition: nC0 + nC1 + nC2 + ... + nCn. This is given by applying the binomial theorem to (1+1)^n (which equals 2^n since 1+1=2). But you don't seem interested in the number of ways of choosing none, so we subtract nC0 = 1. That means there are 2^n-1 ways. •• Fly by Night (talk) 22:08, 14 February 2010 (UTC)
- Quicker is to consider 3-digit binary numbers, with (say) A corresponding to a "1" in the RH position, B to a "1" in the central position and C to a "1" in the LH position. Then there are 8 possibilities 000, 001, 010, 011, 100, 101, 110 and 111, leaving 7 = 2^3 - 1 if the one containing neither A, B nor C is removed. Obviously extendable to any value for the number of objects—86.132.238.199 (talk) 23:06, 14 February 2010 (UTC)
- Interesting! Thanks Ephilei (talk) 16:06, 15 February 2010 (UTC)
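A brute-force check of the count with the standard library (just a sketch): the number of non-empty subsets of an n-element set is 2^n - 1.
<syntaxhighlight lang="python">
from itertools import combinations

def count_nonempty_subsets(n):
    return sum(1 for k in range(1, n + 1) for _ in combinations(range(n), k))

for n in range(1, 8):
    assert count_nonempty_subsets(n) == 2**n - 1
    print(n, count_nonempty_subsets(n))
</syntaxhighlight>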
February 15
Definition of
Why do we typically define to be , instead of ? Yakeyglee (talk) 03:11, 15 February 2010 (UTC)
- Both and are square roots of . Often the expression is avoided, because it's ambiguous; you're right when you say there isn't any particular reason it should be rather than . But when this expression is used, it is typically meant to stand for a single value, not two different values. The same goes for the expression , which always means just 2, not ±2, even though both 2 and −2 are square roots of 4. (The square root symbol , when applied to a nonnegative real number, always indicates the principal square root, not just any square root. However, there is no compelling reason to say that , rather than , should be the "principal" square root of −1.) —Bkell (talk) 03:26, 15 February 2010 (UTC)
- To add to that, there's really nothing to distinguish the two square roots of -1. We arbitrarily pick one of them to be the principal square root. Then the other one must be . However, if we switched them, the complex numbers would look the same. Rckrone (talk) 05:08, 15 February 2010 (UTC)
- Also notice that defining to be isn't a great notation in computations, for then e.g. should mean either or , and so on.--pma 18:09, 15 February 2010 (UTC)
- The radical square root sign indicates the positive root by convention. For example, has two solutions, . Of course, two numbers square to 2. Similarly, three (complex) numbers cube to 2, and so on. Often the notation denotes the set of all z powers of the complex number . When z is an integer, there is only one power (or none, as in 1/0). When z is fractional or complex, there can be multiple powers. Tbonepower07 (talk) 04:13, 16 February 2010 (UTC)
What's the name for the argument of the logarithmic function?
The exponentiation a^x involves two numbers, each of which has its own name ("base" and "exponent") within the term a^x, so that the exponential term a^x can be read explicitly: "exponentiation of the base a, to the exponent x".
How about the logarithm log_a x? Note that it involves two numbers as well, a being the base - as before; however, does x have - also here - its own name within the term log_a x? In other words: how should the logarithmic term log_a x be read? "Logarithm of the...(???) x, to the base a"...? HOOTmag (talk) 20:50, 15 February 2010 (UTC)
- Actually I'd say "a to the x" for the exponentiation. I may be wrong but I don't believe there is any special name for the argument of log. And I'd say something like "the log of x" or just "log x" where there's lots, or "log a of x" when the base needs to be stated. Dmcq (talk) 21:15, 15 February 2010 (UTC)
- I was taught "log x, base a", for what that counts. - Jarry1250 [Humorous? Discuss.] 21:31, 15 February 2010 (UTC)
- Yes, I know that, just as you can say "exponentiation of a, to the exponent x". However, when reading the exponential term a^x you can also use the explicit name "base" for a, and say: "exponentiation of the base a to the exponent x". My question is about whether you can also use any explicit name for x - when reading the logarithmic term log_a x, i.e. by saying something like: "Logarithm of the blablabla x, to the base a"...
HOOTmag (talk) 08:53, 16 February 2010 (UTC)
February 16
Round 0.45 or 0.49 or 0.50
For 5.55555, would it be closer to 6? 9.45 rounds to 9.5, but I dunno if the .45 could round the 9 up to a 10 or if it pulls it back down to 9. In 8th grade math somebody told me .49 or lower I round back down, .51 or above I round the number up. Would .50 bring the whole number up or keep it down? --69.229.36.56 (talk) 00:19, 16 February 2010 (UTC)
This is the famous "primary school mathematics". Most primary school "mathematics textbooks" dictate that 9.45 rounds to 9, and that 5.55555 rounds to 6. The number 9.5 will round to 10 because, for some weird reason, "primary school mathematics" thinks that 9.5 is closer to 10 than it is to 9. So .50 brings the number up but technically speaking, that convention makes little sense. When "rounding numbers" be sure to note that a decimal is rounded to the whole number nearest to it; therefore, .49 or lower is "rounded down" because .49 is closer to 0 than it is to 1, and .51 or above is "rounded up" because .51 is closer to 1 than it is to 0. And your "primary school mathematics teachers" will probably tell you that .50 is rounded up, but they are sure to be clueless if you ask them why (I am clueless as well but I am not the one who created or practiced that convention...). PST 02:18, 16 February 2010 (UTC)
- I like to think of it this way: If you're piloting a plane, and you get half way with half a tank of fuel left, are you going to go back to fill up or carry on to your destination? This has no bearing on mathematics, but perhaps the assumptions that underlie the convention (see half-full glass) are related. Or, I might be spouting nonsense. —Anonymous DissidentTalk 11:59, 16 February 2010 (UTC)
- See Rounding. And always rounding up has no great merits compared to rounding down. Rounding to even will give better overall behaviour, there is an interesting story there about a stock exchange always rounding down. Dmcq (talk) 02:29, 16 February 2010 (UTC)
- @PST: the obvious answer is that in the absence of any pressing arguments for either direction, rounding .50 up is preferable because it makes a simpler rule ("round it down if the next digit is 0 to 4, and round it up if it is 5 to 9").—Emil J. 11:46, 16 February 2010 (UTC)
- I think the poster was thinking that perhaps rounding twice in succession with an intermediate precision should lead to the same result as rounding once to the final precision. There's no reason for that to hold and 'double rounding' is an implementation problem with java on ix86 pc's because of them sometimes holding floating point in registers to higher precision than they store them. Dmcq (talk) 12:00, 16 February 2010 (UTC)
- I always used to explain that the rule (for science) is to round up from half-way because the percentage error will then always be smaller, but that, in statistics, a different rule such as "round to even" would help to avoid bias. Dbfirs 22:52, 16 February 2010 (UTC)
- By the same idea one can sometimes defend rounding with the division at the geometric mean of the two possibilities so when rounding to either 9 or 10 the mid point is 9.486 Dmcq (talk) 23:08, 16 February 2010 (UTC)
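A small sketch of the bias point: over many exact halves, always rounding .5 up drifts upward on average, while round-half-to-even (which is what Python's built-in round happens to use) cancels out.
<syntaxhighlight lang="python">
import decimal

halves = [x + 0.5 for x in range(1000)]      # 0.5, 1.5, 2.5, ..., 999.5 (all exact in binary)

def round_half_up(v):
    return int(decimal.Decimal(v).quantize(decimal.Decimal('1'),
                                           rounding=decimal.ROUND_HALF_UP))

up_error   = sum(round_half_up(v) - v for v in halves) / len(halves)
even_error = sum(round(v) - v for v in halves) / len(halves)   # round() is round-half-to-even
print(up_error)    # +0.5 on average: a systematic upward bias
print(even_error)  # 0.0 on average: the errors cancel
</syntaxhighlight>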
Cauchy Sequence Ring and Null Sequences - Square root of 2
Hi all,
I've been playing around with the ring C of Cauchy sequences in the rational numbers and the subset N of null sequences (those that tend to 0 as n tends to infinity). I've shown that N is a maximal ideal in C, which means C/N is a field - two members of this field are equal if they differ by a null sequence, I believe, i.e. they share the same limit as n tends to infinity - so we can find a subfield of all the sequences which have a rational limit and identify that with the rationals themselves. However, I'm told that the equation x^2=2 has a solution in this field, and I have no idea why. (Assuming I've got the right subfield - the problem I'm doing says deduce C/N is a field with a subfield which can be identified with Q - that must be the limits of the sequences, right?) - So would anyone be able to explain to me why a solution to x^2=2 exists in this field, yet of course doesn't exist in the rationals themselves?
Many thanks, 82.6.96.22 (talk) 08:21, 16 February 2010 (UTC)
- Let be arbitrary, and let be a sequence of rational numbers converging to x (any such sequence is necessarily Cauchy). The coset of N (the ideal of null sequences) by "behaves like" the real number x relative to the other elements in . Therefore, we would expect that the field is in fact isomorphic to the field of real numbers (which contains the field of rational numbers as a subfield). Therefore, you are indeed correct that the field of cosets of N by elements in C that have rational limits is isomorphic to .
- If you would like a more formal proof of , consider the function defined by . If we consider an arbitrary Cauchy sequence c in C, it must have a real number limit x, whence (the limit of this difference is zero by standard limit laws); hence the element is the image of x under f, and the map f is surjective. By a similar "coset type argument" one can see that the map f is also injective (for completeness, note that which is in N if and only if ). It should now be clear that f is an isomorphism (being a bijective homomorphism).
- Now that has been established, it is easy to see that a solution to the equation exists in ; in fact, it can be shown that "many more" polynomial equations have solutions in this field, as there is nothing special about the given equation in the previous argument. Hope this helps (and feel free to ask any further questions if necessary). PST 09:35, 16 February 2010 (UTC)
- Oh I see - I tried to phrase the question as the original problem was worded, but it wasn't specific which field it was referring to which had a solution to x^2=2: I assumed they were talking about the subfield isomorphic to Q having a solution to x^2=2; do you think they meant the whole field C/N which is isomorphic to R having a solution? I just assumed it was the subfield because x^2=2 having a solution isn't really remarkable in R, but have I misunderstood things? Thanks very much - 82.6.96.22 (talk) 20:00, 16 February 2010 (UTC)
- (I spent the day thinking about it and couldn't work out a way to correspond any rational-limited sequence to x^2=2, so it'd make me feel a lot better to know I'd been trying to answer the wrong question! Although then, it seems more appropriate to ask why would we expect it not to have a solution in the first place?) 82.6.96.22 (talk) 20:08, 16 February 2010 (UTC)
- Well, if a field F is isomorphic to , it should have "the same algebraic structure as ". Therefore, if has the property that , the image of x under an isomorphism from F to (let us denote an arbitrary such isomorphism by "f") also has the same property (since ). In particular, the problem would not be correct if it required one to prove that a field isomorphic to contains a solution to the equation . Does this answer your questions?
- My first thought when you posed the problem was that it had little to do with analytic concepts (such as Cauchy sequences and limits) but more to do with algebra (algebra "disguised" within analysis, if you like). Thus I approached the problem by looking at it algebraically rather than analytically. PST 08:09, 17 February 2010 (UTC)
- Yes, I admit when I asked it, I was surprised that there could be something like an isomorphism preserving structure but somehow failing to retain such a fundamental property as the irrationality of square roots in Q - the question makes much more sense that way, thankyou ever so much for the help :) 82.6.96.22 (talk) 08:26, 17 February 2010 (UTC)
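To make the isomorphism a little more tangible, here is a sketch (standard library only) of a rational Cauchy sequence whose coset in C/N squares to the coset of the constant sequence 2, i.e. a concrete square root of 2 in the quotient field: Newton's iteration for x^2 = 2 carried out entirely in Q.
<syntaxhighlight lang="python">
from fractions import Fraction

# x_{k+1} = (x_k + 2/x_k) / 2 stays rational and is Cauchy, and (x_k^2 - 2) is a null
# sequence, so the coset of (x_k) in C/N satisfies X^2 = 2 even though no rational does.
x = Fraction(1)
for k in range(6):
    x = (x + 2 / x) / 2
    print(k, x, float(x * x - 2))
</syntaxhighlight>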
Solving sin(x^2)
A few days ago a student who I was tutoring had to find all solutions to . That was no problem, but it occurred to me that the only way I knew to solve something like was by treating it as a composition of functions. So I'm curious: is there another relatively elementary way to solve this? A math-wiki (talk) 08:37, 16 February 2010 (UTC)
- Could you please clarify what exactly you mean? If we determine the set of all z such that the sine of z is , the set of all square roots of elements in this set would be the solution set of (which is, I think, what you meant by "solving the equation by treating it as a composition of functions"). Other than this method, I do not think that there exists an elementary method to solve these sorts of equations (and even if such an elementary method existed for , it is unlikely it would generalize to for an arbitrary function g whose inverse exists and is explicitly known). PST 09:45, 16 February 2010 (UTC)
I think I partially answered my own question: wouldn't taking the arcsin (as a multivalued function, i.e. taking ALL solutions, not just the ones the proper inverse function would give) and then taking the square root of both sides, accounting for +/- solutions to the root on the side without the variable, work? That is;
(arcsin here is NOT the usual inverse function but rather a different operator altogether that gives the whole solution set for sin(x)=1/2)
The one problem I have with this is that it's essentially solving it in the following manner, just written differently.
Let and , solve for x.
Then for all solutions C of , solve .
This is what I meant by solving it as a composition of functions (e.g. was our original problem and all the solutions to are precisely the solutions to ). A math-wiki (talk) 09:12, 17 February 2010 (UTC)
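As a sanity check of that approach, here is a sketch that generates the real solutions it produces and verifies them, assuming the equation in question is sin(x^2) = 1/2 as discussed above:
<syntaxhighlight lang="python">
import math

# sin(u) = 1/2  <=>  u = pi/6 + 2*pi*k  or  u = 5*pi/6 + 2*pi*k  for an integer k.
# The real solutions of sin(x^2) = 1/2 are x = +/- sqrt(u) for each such u >= 0.
solutions = []
for k in range(5):                           # the first few non-negative branches
    for u in (math.pi / 6 + 2 * math.pi * k, 5 * math.pi / 6 + 2 * math.pi * k):
        solutions.extend((math.sqrt(u), -math.sqrt(u)))

assert all(abs(math.sin(x * x) - 0.5) < 1e-12 for x in solutions)
print(sorted(solutions))
</syntaxhighlight>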
Sum of reciprocals of cubics
I need to find the sum of the series
It's pretty obvious that the nth term is but I'm not sure if that helps me find the sum to n terms, and the sum to infinity.--220.253.101.175 (talk) 08:43, 16 February 2010 (UTC)
- I think you can write out that general term as a partial fraction decomposition giving you a sum like A/(3n-1) + B/(3n+2) + C/(3n+5) (you have to solve for the coefficients) and then you can handle those series one at a time. (Hmm, wait, that's probably no good, they would all diverge). There are powerful ways of doing these sums with contour integrals but that's probably not what you want, and I don't remember anything about how to do it any more. 66.127.55.192 (talk) 10:34, 16 February 2010 (UTC)
- No, continue on with that fractional decomposition idea and write down the first few terms decomposed that way. You should spot something about the terms which makes things easy. Dmcq (talk) 11:44, 16 February 2010 (UTC)
- More details. This case is particularly simple because the coefficients A B C above give you a telescoping series (write the partial sum for n from 1 to m as a linear combination of the three partial sums, respectively with coefficients A B C; observe that they are in fact the same partial sum, up to a shift and up to the first and the last terms, and note that A+B+C=0). So this is just a three-term version of the easier telescopic sum of 1/n(n+1) shown in the link, and you can even write an analog closed formula for the sum for n from 1 to m. The analog, more general situation, where you don't have cancellations, may be treated using the logarithmic asymptotics for the finite sums , that produces an exact value for the sum of the series. (1/60) --pma 15:26, 16 February 2010 (UTC)
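A numerical check of the telescoping (a sketch; the nth term 1/((3n-1)(3n+2)(3n+5)) is inferred from the partial fractions quoted above): writing the term as (1/6)[1/((3n-1)(3n+2)) - 1/((3n+2)(3n+5))] makes the partial sums collapse, and the series sums to 1/60.
<syntaxhighlight lang="python">
def term(n):
    return 1 / ((3*n - 1) * (3*n + 2) * (3*n + 5))

def telescoped_partial_sum(m):
    # (1/6) * (1/(2*5) - 1/((3m+2)(3m+5))), from the collapsing middle terms
    return (1/6) * (1/10 - 1/((3*m + 2) * (3*m + 5)))

for m in (1, 5, 50, 5000):
    direct = sum(term(n) for n in range(1, m + 1))
    assert abs(direct - telescoped_partial_sum(m)) < 1e-12
    print(m, direct)

print(1/60)   # the limit of the partial sums
</syntaxhighlight>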
What's the name for a holomorphic function (e.g. a logarithmic function) not included in any other holomorphic function?
HOOTmag (talk) 09:31, 16 February 2010 (UTC)
- What do you mean by "included"? Staecker (talk) 15:27, 16 February 2010 (UTC)
- If "included" refers to inclusion of graphs, that is extension, I'd say "maximally defined" or "maximally extended" or "defined on a maximal domain of analyticity of its". Check also domain of holomorphy for related concepts. --pma 15:43, 16 February 2010 (UTC)
I mean a holomorphic function which can't be extended analytically any further. Is the term "maximally" in common usage for holomorphic functions which can't be extended analytically any further? And why shouldn't we call them simply "maximal holomorphic functions"? HOOTmag (talk) 16:28, 16 February 2010 (UTC)
- As far as I see, the adjective "maximal" with "function" or so usually refers to other partial orders than extension, and I suspect that "maximal holomorphic function" may leave some doubts on its meaning (although personally I would vote for it). By the way, "maximal solution" in the context of ODEs also sounds ambiguous (some use it in the sense of extension, some in the sense of pointwise order). "Maximal holomorphic extension" (of a function/germ) or "maximally extended holomorphic function" is longer but doesn't seem to need explanation, and in fact reflects a standard general usage (e.g. "maximally extended" gets more than 20,000 google results). So, if you need a short form to be used several times e.g. in a paper, I'd suggest giving the explicit definition first. Of course, the best would be finding an expression from an authoritative source. --pma 17:55, 16 February 2010 (UTC)
- In Google, the expression "maximally extended" appears in various contexts, including physics (like in "maximally extended universes"), geometry (like in "maximally extended polygons"), and the like. However, if you say that this expression "reflects a standard general usage" (in our context of holomorphic functions) then I accept your testimony.
- How about: "maximally extended analytically"? Is it a common expression as well? Anyways, it's less ambiguous, isn't it? HOOTmag (talk) 18:31, 16 February 2010 (UTC)
- Yes.. Also in google "maximal holomorphic extension" gives about a hundred results, among which are some books and papers that may give you some hints, e.g. [1]--pma 21:59, 16 February 2010 (UTC)
- THANKXS. HOOTmag (talk) 01:42, 17 February 2010 (UTC)
18*4/6+77-5
- Get a calculator. Turn it on. Type 18. Press x. Type 4. Press ÷. Type 6. Press +. Type 77. Press -. Type 5. Press =. I believe the calculator will display 84. -- kainaw™ 15:49, 16 February 2010 (UTC)
- Not if it is a reverse Polish notation calculator! Nimur (talk) 15:54, 16 February 2010 (UTC)
- Depending on the reason the question was asked, order of operations might help. - Jarry1250 [Humorous? Discuss.] 15:58, 16 February 2010 (UTC)
Sobbmub, please do not re-post the same question multiple times. You've already received answers here. Nimur (talk) 16:06, 16 February 2010 (UTC)
- If you search using google just stick the expression in as your search. The nice people at google will quickly work it out on their calculators and send the result back to you. Much easier and faster than asking the wikipedia reference desk. Dmcq (talk) 17:06, 16 February 2010 (UTC)
- Aren't they supposed to be imps?—Emil J. 17:18, 16 February 2010 (UTC)
Cancel the 18 and the 6 to get 3, BEFORE multiplying. It's not strictly necessary to do it that way, but in some contexts it's useful, so I make a habit of it. Michael Hardy (talk) 22:05, 16 February 2010 (UTC)
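The same evaluation spelled out, with the multiplication and division done left to right as the order of operations requires:
<syntaxhighlight lang="python">
print(18 * 4 / 6 + 77 - 5)     # 72/6 + 77 - 5 = 12 + 77 - 5 = 84.0
print((18 / 6) * 4 + 77 - 5)   # cancelling 18 with 6 first, as suggested above: also 84.0
</syntaxhighlight>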
math conversion
is there a website where I can learn how to convert a fraction into a percentage, convert a percentage into a fraction, convert a percentage into a decimal, convert a decimal into a percentage, convert a decimal into a fraction and convert a fraction into a decimal? —Preceding unsigned comment added by 74.14.118.34 (talk) 16:20, 16 February 2010 (UTC)
- Try one of these, perhaps: [2] [3] [4]. Or just search for it. —Bkell (talk) 16:56, 16 February 2010 (UTC)
- convert fraction into decimal - Divide numerator by denominator. ex: 3/4 = 3 divided by 4 --> .75
- convert decimal into percentage - Multiply the number by 100. ex: .75 * 100 = 75%
- convert fraction into a percentage - Convert fraction to decimal, then decmial to percentage. ex: 3/4 = .75. .75 * 100 = 75%
- convert decimal into fraction - write the decimal over 1, then multiply the numerator and denominator by 10 until the numerator is a whole number. ex: .341 = .341/1 = 341/1000. This can then (sometimes) be simplified. ex: .5 = 5/10 = 1/2.
- convert percentage into a fraction - Divide percentage by 100 ex: 75% = 75/100
- convert percentage into a decimal - convert percentage to fraction, then convert fraction to decimal. ex: 75% = 75/100 = .75
Though I'm sure some of those sites describe it better. Hope this helps! Chris M. (talk) 17:54, 17 February 2010 (UTC)
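Not part of the replies above, but here is a minimal Python sketch of the same conversions (Python's fractions module does the simplifying for you):

from fractions import Fraction

print(3 / 4)              # fraction -> decimal: 0.75
print(0.75 * 100)         # decimal -> percentage: 75.0, i.e. 75%
print(Fraction(75, 100))  # percentage -> fraction: 3/4 (simplified automatically)
print(Fraction('0.341'))  # decimal -> fraction: 341/1000
print(Fraction('0.5'))    # decimal -> fraction: 1/2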
Least squares and oscillating solutions
Hi all,
I'm running a least squares algorithm for multilateration, compensating for error and noise, meaning I have an overdetermined system. The algorithm is the one in this paper.
The target is the middle black dot (0.6,0.8) and there are 4 sensors in a square from (0,0) to (1,1). The circles indicate the samples from the sensors, sampled from a normal distribution with mean the distance between the sensor and target and common variance.
In this scenario the solver oscillates between two solutions, S1 and S2, even after at least 10,000 iterations. I suspect it may be because the fourth circle doesn't intersect any of the others, so there are multiple solutions.
Is there a way to obtain a solution that converges to a single solution when such a scenario arises? Is it even meaningful?
Thanks in advance. x42bn6 Talk Mess 16:29, 16 February 2010 (UTC)
- Two possibilities emerge to explain your oscillation: (1) failure to converge for numerical reasons (i.e., incorrectly implemented or inefficiently slow converging LSQR algorithm); and/or (2) failure to converge because of physical problem setup (i.e., actual existence of two global minima).
- If you have four receivers, you have an overdetermined system. This means you have a null space which describes the set of solutions that are least incorrect - and you are solving for the least squares error. I think the solution should be well-defined - a single, unique point with minimal error. LSQR should be navigating that null space to converge at a specific point. One way to tell whether the issue is your descent algorithm or your problem formulation is to print the value of the error (residual) at every iteration. Is it decreasing, or is it remaining the same as you iterate up to 10,000 steps? If it continues to decrease, you have slow convergence, and you might switch to a better solver (such as a conjugate gradient solver). If the residual is not decreasing, you have identified the null space of your physical problem, and must specify some other stop condition (e.g. maximum iteration number) or redefine the error criteria for your physical problem. Sometimes in such source-location-detection problems, your setup is symmetric - meaning that there is an inherent ambiguity between (for example) sources directly in front of and sources directly behind your receiver array. Do your S1 and S2 solutions show some kind of physical symmetry like that? Nimur (talk) 17:00, 16 February 2010 (UTC)
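For anyone who wants to experiment, here is a rough NumPy sketch (my own, not the algorithm from the paper linked above) of a Gauss-Newton least-squares fit for the setup described - four sensors at the corners of the unit square, a target at (0.6, 0.8), noisy ranges - printing the residual norm at every iteration as suggested:

import numpy as np

# Hypothetical setup mirroring the question: four sensors at the corners of the
# unit square, target at (0.6, 0.8), range measurements with Gaussian noise.
rng = np.random.default_rng(0)
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
target = np.array([0.6, 0.8])
ranges = np.linalg.norm(sensors - target, axis=1) + rng.normal(0.0, 0.05, 4)

def gauss_newton(x, sensors, ranges, iters=15):
    for k in range(iters):
        diff = x - sensors                     # shape (4, 2)
        dist = np.linalg.norm(diff, axis=1)    # predicted ranges
        r = dist - ranges                      # residual vector
        J = diff / dist[:, None]               # Jacobian of the residuals
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x - step
        print(k, np.linalg.norm(r))            # should settle down, not oscillate
    return x

print(gauss_newton(np.array([0.5, 0.5]), sensors, ranges))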
- Another thing to check - are you using double-precision or single-precision math? In practice, a lot of oscillation and instability is caused by machine roundoff. This can occur even if you have the appropriate accuracy in single-precision to describe your data - but your residual may be inaccurately calculated, screwing the algorithm up. Nimur (talk) 17:02, 16 February 2010 (UTC)
- I'm using double-precision, Java doubles. The errors, evaluated at the current "guess" c, oscillate between 0.246303038971375.. and 0.253732400873977.. but don't seem to go down or converge. I may have simply programmed it wrong.
- Except for these irritating cases I get something which is sensible. A 2D histogram gives a normal distribution-like 3D surface centred about the true position, so I know the algorithm works in some sense. x42bn6 Talk Mess 17:14, 16 February 2010 (UTC)
- Hm. I'll take a look at that paper in a little more detail to decipher the algorithm and see if I can't spot an obvious trouble-spot. Have you tried other simulation inputs and obtained the same oscillation? Nimur (talk) 22:50, 16 February 2010 (UTC)
Mathmatical formula for infinity symbol
I have a 3D application that allows me to draw different waveforms with the variables X(t), Y(t), and Z(t). I can also plug numbers into Tmin and Tmax. Based on this, how could I draw a figure-8 infinity symbol? (Googling 'infinity symbol math formula' doesn't give me anything relevant.) --70.167.58.6 (talk) 17:51, 16 February 2010 (UTC)
- It's called a Lemniscate. Black Carrot (talk) 18:21, 16 February 2010 (UTC)
- Mathworld has the parametric equations for one version of the lemniscate. Black Carrot (talk) 18:22, 16 February 2010 (UTC)
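In case a ready-made parametrization is handy, here is a short Python sketch of one option (my choice of the Gerono lemniscate, which is the simplest figure-eight; the Bernoulli lemniscate from the MathWorld link works just as well):

import math

# Lemniscate of Gerono: X(t) = cos t, Y(t) = sin t * cos t, Z(t) = 0,
# traced once as t runs from 0 to 2*pi (so Tmin = 0, Tmax = 2*pi).
for i in range(9):
    t = 2 * math.pi * i / 8
    print(round(math.cos(t), 3), round(math.sin(t) * math.cos(t), 3), 0.0)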
Alternate Law of Cosines
1.) <math>c^2 = a^2 + b^2 - 2ab\cos C</math>,
2.) <math>a^2 = b^2 + c^2 - 2bc\cos A</math>,
3.) <math>b^2 = a^2 + c^2 - 2ac\cos B</math>,
Add 2.) and 3.):
<math>a^2 + b^2 = a^2 + b^2 + 2c^2 - 2bc\cos A - 2ac\cos B</math>.
Terms in <math>a^2</math> and <math>b^2</math> cancel, leaving
<math>2c^2 - 2bc\cos A - 2ac\cos B = 0</math>.
Divide out the common factor 2c and move terms:
4.) <math>c = a\cos B + b\cos A</math>, similarly:
5.) <math>b = a\cos C + c\cos A</math>,
6.) <math>a = b\cos C + c\cos B</math>.
Are not equations 4, 5 and 6 alternative forms of the law of cosines? Should they not be mentioned in the article on that law? —Preceding unsigned comment added by 71.105.162.193 (talk) 17:54, 16 February 2010 (UTC)
- I thought the law of cosines was most useful when you knew the lengths of two sides and the angle between them. If you know two sides (WLOG a and b) and two angles (WLOG A and B), then the sine rule is the fastest, using 180°=A+B+C. x42bn6 Talk Mess 18:01, 16 February 2010 (UTC)
Glancing at it for a few seconds, it looks correct. Whether the law of cosines can be deduced from these identities is another question. What else besides that can be deduced from them is yet another; in particular, might one use them in solving triangles or for other purposes? Michael Hardy (talk) 18:14, 16 February 2010 (UTC)
- The problem is that they contain redundant information: if you know, e.g., two of the angles A and B, you can deduce the third and only need to know one side to work out all the others. Not only is this wasteful (you have to do extra work measuring four things) but it is also more complex: how do you deal with, e.g., the values not all agreeing because you can measure angles better than lengths? You can do extra calculations to make sure they agree, but then you've done at least as much work as with the original formula.--JohnBlackburnewordsdeeds 18:18, 16 February 2010 (UTC)
- Equation 4 can be deduced immediately by drawing the perpendicular from C to c, and similarly for the other equations. But there is a proof of the law of cosines here if you take your derivation in reverse.
Here's another way to derive it. Recall that the law of sines says that
<math>\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C},</math>
so that for some constant d we must have
<math>a = d\sin A, \quad b = d\sin B, \quad c = d\sin C</math>
(the constant of proportionality, d, is actually the diameter of the circumscribed circle, but we won't need that fact here). So then
<math>a\cos B + b\cos A = d\sin A\cos B + d\cos A\sin B = d\sin(A+B),</math>
and since A + B + C = half-circle, it follows that sin(A + B) = sin(C). Consequently
<math>a\cos B + b\cos A = d\sin C,</math>
and hence
<math>a\cos B + b\cos A = c.</math>
Michael Hardy (talk) 23:53, 16 February 2010 (UTC)
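As a quick numerical sanity check of identity 4.) (my own sketch in Python, with arbitrarily chosen sides and included angle, not anything from the discussion above):

import math

# Pick two sides and the included angle, get the third side from the law of
# cosines and the remaining angles from the law of sines, then compare
# c with a*cos(B) + b*cos(A).
a, b, C = 3.0, 5.0, math.radians(70)
c = math.sqrt(a*a + b*b - 2*a*b*math.cos(C))
A = math.asin(a * math.sin(C) / c)    # A is opposite the smallest side, so asin is safe here
B = math.pi - A - C
print(c)                              # about 4.872
print(a*math.cos(B) + b*math.cos(A))  # the same number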
One last question on principal ideals
Hi there everyone,
Another one from me, this is my last question on Ring theory for the fortnight (hooray) and it's a toughie unfortunately.
Let F be a field, and let R=F[X,Y] be the polynomial ring in 2 variables. i) Let I be the principal ideal generated by the element X-Y in R. Show that <math>R/I \cong F[X]</math>. ii) What can you say about R/I when I is the principal ideal generated by <math>Y+X^2</math>? iii) [Harder] What can you say about R/I when I is the principal ideal generated by <math>X^2-Y^2</math>?
The first was fine, I used the isomorphism theorem for rings with the homomorphism taking f(x,y) to f(x,x), which has kernel (X-Y), and for the second I took the same approach, setting f(x,y)=f(x,-x^2) - I got the same solution as (i), is that right? It's the third part of the question I'm having problems with, since obviously we can't just set f(x,y)=f(x,±x), because that has ambiguities, and x=y or x=-y aren't valid either. Could anyone suggest what I should do to evaluate R/I in this case?
Thanks very much all :) 82.6.96.22 (talk) 21:43, 16 February 2010 (UTC)
- You'll have to decide what you mean by "evaluating" R/I. It's certainly not isomorphic to k[x]. Try writing down a basis for R/I as an F-vector space, that might give you a feel for what the ring is like Tinfoilcat (talk) 23:27, 16 February 2010 (UTC)
- Well, if I could find an obvious analogue to F[x] to which R/I would be isomorphic, then that would be great, but I'm not sure whether one actually exists - I'll try and fathom out a basis and see if I can get anywhere with it. 82.6.96.22 (talk) 08:28, 17 February 2010 (UTC)
(Consider the following in the context of (iii); it may provide a different way of analyzing the problem) Let <math>p(X,Y)=X+Y</math> and let <math>q(X,Y)=X-Y</math>, and consider the cosets <math>r=p(X,Y)+I</math> and <math>s=q(X,Y)+I</math> in <math>F[X,Y]/(X^2-Y^2)</math>. Note that <math>rs=0</math>, <math>\frac{r+s}{2}=X+I</math>, and <math>\frac{r-s}{2}=Y+I</math>. Consider the ring <math>F[X,Y]/(XY)</math>. PST 11:08, 17 February 2010 (UTC)
- In other words, use the ring version of the Chinese remainder theorem.—Emil J. 12:26, 17 February 2010 (UTC)
- It's not really the CRT, which only applies if the two ideals are coprime. There's something slightly different going on though: what you get, as PST observes, is a fibre product (pullback). This example should be an instance of a more general result about fibre products, something like: let R be a ring with max ideal m such that R/m=F. Let I,J be ideals such that I+J=m. Then R/IJ is iso to the fibre product of R/I and R/J 129.67.37.143 (talk) 21:16, 17 February 2010 (UTC)
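(Not from the discussion above, but to spell out Tinfoilcat's basis suggestion for part (iii): since <math>Y^2\equiv X^2</math> modulo I, every monomial reduces to <math>X^a</math> or <math>X^aY</math>, and no nonzero F-linear combination of these lies in I, so <math>\{\,X^a+I,\ X^aY+I : a\ge 0\,\}</math> is an F-vector-space basis of R/I. Assuming the characteristic of F is not 2, the fibre-product description in the previous reply then identifies R/I with the pairs <math>(g,h)\in F[T]\times F[T]</math> satisfying <math>g(0)=h(0)</math>, via <math>f+I\mapsto(f(T,T),\,f(T,-T))</math>.)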
LaTeX question, columns
I want to type up a list of derivatives and integrals, just like the table that is in the front or back cover of a lot of calculus textbooks. That is, a one-page, one-sided sheet with just the standard derivatives and integrals, up through arcsine and such, but not hyperbolic functions and their inverses. Anyway, what I have in mind is "Derivatives" on top with 2 columns and then "Integrals" on the bottom half with 2 columns. Does anyone know a good way to do this? I used something called "multicols" but it doesn't line up well. That is, if some formula is taller than the others, the two columns don't line up well. I want 1. to be in the exact same vertical position as 10 (or whatever number), then 2 and 11, 3 and 12, and so on. Any help would be appreciated. Thanks. NumberTheorist (talk) 22:29, 16 February 2010 (UTC)
- Hmm, I guess I could just do a table. That might work nicely and it's simple. Well, if you have any great ideas, I'd love to hear them but otherwise I'll just do a table. NumberTheorist (talk) 22:50, 16 February 2010 (UTC)
- I am not sure why you'd want to align rows exactly, especially if the formulas had very different heights, but that is your call. A table would do that. You can always permute the formulas to approximately match heights within each row. Baccyak4H (Yak!) 19:25, 17 February 2010 (UTC)
- It's not that they have very different heights. It's that they have slightly different heights and most are a standard height. But, with multicols, a few with slightly different heights adds up and then the rows aren't lined up well at all. Also, in this case, I want the formulas in a certain order that makes sense, like all the trig derivatives will be in a group at the bottom and sin and cos go first as far as those go, and other things like this. I think a table is a perfect idea and I don't know why I didn't think of it sooner. NumberTheorist (talk) 20:52, 17 February 2010 (UTC)
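For what it's worth, here is a minimal LaTeX sketch of the tabular approach (the particular formulas and column widths are just placeholders):

\documentclass{article}
\usepackage{amsmath}
\begin{document}
\section*{Derivatives}
\noindent
\begin{tabular}{@{}p{.48\linewidth}p{.48\linewidth}@{}}
  1. $\dfrac{d}{dx}\,x^n = n x^{n-1}$      & 4. $\dfrac{d}{dx}\,\sin x = \cos x$  \\[8pt]
  2. $\dfrac{d}{dx}\,e^x = e^x$            & 5. $\dfrac{d}{dx}\,\cos x = -\sin x$ \\[8pt]
  3. $\dfrac{d}{dx}\,\ln x = \dfrac{1}{x}$ & 6. $\dfrac{d}{dx}\,\arcsin x = \dfrac{1}{\sqrt{1-x^2}}$
\end{tabular}
\end{document}

Because each table row holds one formula from each column, entries 1 and 4, 2 and 5, and so on sit at exactly the same vertical position no matter how tall any individual formula is.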
February 17
Proof for this example on the Lambert W function?
The Lambert W function article lists several examples, but only gives a proof for the first one.
Does anyone have a proof for example 3? —Preceding unsigned comment added by Luckytoilet (talk • contribs) 05:05, 17 February 2010 (UTC)
- By continuity of exponentiation, the limit c satisfies <math>c = z^c = e^{c\log z}</math>. Rearranging it a bit gives <math>(-c\log z)e^{-c\log z} = -\log z</math>, thus <math>W(-\log z) = -c\log z</math>, and <math>c = W(-\log z)/(-\log z)</math>. Not quite sure why the example talks about the "principal branch of the complex log function", the branch of log used simply has to be the same one as is employed for the iterated base-z exponentiation in the definition of the limit. Also, note that the W function is multivalued, but only one of its values can give the correct value of the limit (which is unique (or nonexistent) once log z is fixed).—Emil J. 15:04, 17 February 2010 (UTC)
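A quick numerical check of the closed form (my own sketch, assuming SciPy is available; z = 1.3 is an arbitrary value inside the convergence range <math>e^{-e} \le z \le e^{1/e}</math>, not one taken from the article):

import numpy as np
from scipy.special import lambertw

z = 1.3
c = 1.0
for _ in range(200):                              # iterate c -> z**c to approximate the tower
    c = z ** c
print(c)                                          # about 1.4710
print((lambertw(-np.log(z)) / -np.log(z)).real)   # matches to many digits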
Follow up: the name for the argument of the logarithmic function
When reading the exponential term <math>a^x</math>, one can say "exponentiation - of a - to the exponent x". However, one can also use the explicit name "base" for a, and say: "exponentiation - of the base a - to the exponent x". My question is about whether one can also use any explicit name for x - when reading the logarithmic term <math>\log_a x</math>, i.e. by saying something like: "logarithm - of the blablabla x - to the base a"... HOOTmag (talk) 08:02, 17 February 2010 (UTC)
- I would reckon a correct term would be argument (but this is quite general, as it would apply to any such function/monomial operator). Also note it would most likely be read as "logarithm base a of the argument x". A math-wiki (talk) 08:56, 17 February 2010 (UTC)
- Why have you posted this again? There is an ongoing discussion above. I suggest that the term for the argument of log is just argument and that it's most sensible to say "the logarithm of x to the base a". Why must there be a technical term? —Anonymous DissidentTalk 09:01, 17 February 2010 (UTC)
@A math-wiki, @Anonymous Dissident: Sorry, but just as the function of exponentiation has two arguments: the "base", and the "exponent", so too does the function of logarithm have two arguments: the "base", and the other argument (whose name is still unknown), so I can't see how the term "argument" may solve the problem, without a confusion. The problem is as follows: does the function of logarithm have a technical term for the second argument (not only for the first one), just as the function of exponentiation has a technical term for the second argument (not only for the first one)? HOOTmag (talk) 14:27, 17 February 2010 (UTC)
- I believe you have got your answer. No it has no special name that anyone here knows of. The closest you'll come to a name is argument. Dmcq (talk) 15:05, 17 February 2010 (UTC)
- If you've read my previous section, you've probably realized that the term "argument" can't even be close to answering my question. Also note that I didn't ask whether "it has a special name that anyone here knows of", but rather whether "it has a special name", and I'll be glad if anybody here know of such a name, and may answer me by "yes" (if they know that there is a special name) or by "no" (if they know that there isn't a special name). HOOTmag (talk) 17:50, 17 February 2010 (UTC)
- Er, the "that anyone here knows of" part is inherent in the process of answering questions by humans. People cannot tell you about special names that they do not know of, by the definition of "know". If you have a problem with that, you should ask at the God Reference Desk rather than the Wikipedia Reference Desk.—Emil J. 18:02, 17 February 2010 (UTC)
- If you answer me "I don't know of a special name", then you've replied to the question "Do you know of a special name". If you answer me: "Nobody here knows of a special name", then you've replied to the question: "Does anyone here know of a special name". However, none of those questions was my original question, since I'm not interested in knowing whether anyone here knows of a special name, but rather in knowing whether there is a special name. I'll be glad if anybody here know of such a name, and may answer me by: "yes, there is" (if they know that there is a special name) or by: "no, there isn't" (if they know that there isn't a special name). HOOTmag (talk) 18:22, 17 February 2010 (UTC)
- No one can positively know that there isn't a name. You can't get a better answer than what Dmcq wrote (unless, of course, there is such a name after all).—Emil J. 18:33, 17 February 2010 (UTC)
- I can positively know that there is a special name for each argument of the function of exponentiation (the special names are "base" and "exponent"), and I can also positively know that there isn't a special name for the argument of functions having exactly 67 elements in their domain. HOOTmag (talk) 18:49, 17 February 2010 (UTC)
- Trying to dictate to a reference desk how they should reply to you is not a good idea if you want answers to further questions. Dmcq (talk) 19:44, 17 February 2010 (UTC)
- Dictate? never! I've just said that any answer like "Nobody here knows of a special name" - doesn't answer my original question, which was not: "Does anyone here know of a special name", but rather was: "Is there a special name". As I've already said: "I will be glad if anybody here know of such a name, and may answer me by 'YES' (if they know that there is a special name) or by: 'NO' (if they know that there isn't a special name)".
- Note that - to be "glad" - doesn't mean: to try to dictate... HOOTmag (talk) 20:31, 17 February 2010 (UTC)
First/second order languages
- What should we call a first/second order language all of whose symbols are logical (connectives, quantifiers, variables, brackets and identity), i.e. one that contains neither constants nor function symbols nor predicate symbols (but does contain the identity symbol)?
- If a given open well-formed formula contains variables ranging over individuals as well as variables ranging over functions, but all quantifications in it are over variables ranging over individuals only (hence no quantification over variables ranging over functions), then is it a first order formula, or a second order formula?
- Note that such open formulae can be used (e.g.) for defining correspondences (e.g. bijections) between classes of functions (e.g. by mapping every invertible function to its inverse function).
HOOTmag (talk) 17:53, 17 February 2010 (UTC)
- I'm a novice so I'm not sure if my answers are correct. Your 1st question: if there are no predicate symbols, there would be no atomic formulas and hence no wfs. Your 2nd question: it is a second order formula, because first order formulas can only have variables over the universe of discourse. Your note: I think functions are coded as sets in set theories, hence defining a bijection would be a 1st order formula because variables/quantifiers range over sets (the objects in our domain). Money is tight (talk) 18:12, 17 February 2010 (UTC)
- There are plenty of formulas in languages without nonlogical symbols. In first-order logic, apart from <math>\top</math> and <math>\bot</math> (if they are included in the particular formulation of first-order logic), you also have atomic formulas using equality, and therefore the language is sometimes called the "language of pure equality". In second-order logic, there are also atomic formulas using predicate variables. One could probably call it the language of pure equality as well, but there is little point in distinguishing it: any formula in a richer language may be turned into a formula without nonlogical symbols by replacing all predicate and function symbols with variables (and quantifying these away if a sentence is desired). As for the second question, the formula is indeed a second-order formula, but syntactically it is pretty much indistinguishable from the first-order formula obtained by reinterpreting all the second-order variables as function symbols.—Emil J. 18:28, 17 February 2010 (UTC)
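For a concrete illustration (my example, not from the discussion): even with equality as the only way of forming atomic formulas, one can already express nontrivial things, e.g. "there exist exactly two individuals":
<math>\exists x\,\exists y\,\bigl(x\neq y\;\wedge\;\forall z\,(z=x\vee z=y)\bigr).</math>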
- The atomic formula "x=x" has no non-logical predicate symbols (note that the identity sign is logical): all of its symbols are logical (including the symbol of identity)
- Note that the universe of discourse is the set of individuals and of functions ranging over those individuals.
- HOOTmag (talk) 18:35, 17 February 2010 (UTC)
- A first order theory with equality is one that has a predicate symbol that also has axioms of reflexivity and substitutivity. I'm not sure why you say x=x has no non logical symbols. Clearly the only logical connectives are for all, there exist, not, or, and, material implication. And the universe of discourse only contains the individuals D in our question, not functions with domain D^n. The functions are called 'terms', which are used to build atomic formulas and then wfs. Money is tight (talk) 18:43, 17 February 2010 (UTC)
- In first-order logic with equality, the equality symbol is considered a logical symbol, the reason being that its semantics is fixed by the logic (in a model you are not allowed to assign it a binary relation of your choice, it's always interpreted by the identity relation). Anyway, the OP made it clear that he intended the question that way, so it's pointless to argue about it.—Emil J. 19:24, 17 February 2010 (UTC)
- A first order theory with equality is one that has a predicate symbol that also has axioms of reflexivity and substitutivity. I'm not sure why you say x=x has no non logical symbols. Clearly the only logical connectives are for all, there exist, not, or, and, material implication. And the universe of discourse only contains the individuals D in our question, not functions with domain D^n. The functions are called 'terms', which are used to build atomic formulas and then wfs. Money is tight (talk) 18:43, 17 February 2010 (UTC)
Homomorphism
I'm looking for two homomorphisms f:S2->S3, g:S3->S2 such that the composition gf is the identity on S2 (the S means the 2nd and 3rd symmetric groups). Do two such homomorphisms exist? I know everything is finite so I can brute force my way, but I don't like that approach. Thanks Money is tight (talk) 18:00, 17 February 2010 (UTC)
- If gf is the identity, then g is a surjection. Thus its kernel must be a normal subgroup of index 2. Can you find one? This will give you g, and then constructing the matching f should be easy.—Emil J. 18:14, 17 February 2010 (UTC)
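If you ever want to double-check a candidate pair without doing the algebra by hand, here is a small brute-force check in Python (my own sketch; it encodes the inclusion of S2 into S3 that fixes the extra point, and the sign map going back, which is the sort of pair EmilJ's hint leads to):

def compose(p, q):                      # (p*q)(i) = p(q(i)), permutations as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def sign(p):                            # +1 for even permutations, -1 for odd
    return (-1) ** sum(p[j] > p[i] for i in range(len(p)) for j in range(i))

f = {(0, 1): (0, 1, 2), (1, 0): (1, 0, 2)}   # f: S2 -> S3, act on {0,1}, fix 2
def g(q):                                    # g: S3 -> S2, the sign map
    return (0, 1) if sign(q) == 1 else (1, 0)

S2 = [(0, 1), (1, 0)]
S3 = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]

print(all(g(f[x]) == x for x in S2))                                          # g∘f = id on S2
print(all(g(compose(a, b)) == compose(g(a), g(b)) for a in S3 for b in S3))   # g is a homomorphism
print(all(f[compose(a, b)] == compose(f[a], f[b]) for a in S2 for b in S2))   # f is a homomorphism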
Ominus
What's the conventional meaning and usage of the symbol encoded by the LaTeX markup "\ominus" (i.e. <math>\ominus</math>)? There doesn't currently seem to be an ominus Wikipedia page yet. -- 140.142.20.229 (talk) 18:50, 17 February 2010 (UTC)
- Mainly this: if <math>A\subseteq B</math> are closed linear subspaces of a Hilbert space, <math>B\ominus A</math> denotes the orthogonal complement of <math>A</math> relative to <math>B</math>, that is <math>B\ominus A = B\cap A^\perp</math>. It comes of course from the notation for the orthogonal sum, <math>B = A\oplus(B\ominus A)</math>. As you see, it doesn't seem theoretically relevant enough to deserve an article of its own; but as a notation it is nice and of some use. --pma 19:08, 17 February 2010 (UTC)
- It is also used in loads of other places, like removing parts of a graph, or when reasoning about computer floating point, where there is a vaguely subtraction-like operator and the person wants a symbol for it. Basically a generally useful extra symbol. Dmcq (talk) 19:36, 17 February 2010 (UTC)
Spherical harmonic functions
I'm looking for the normal modes of a uniform sphere. I have a classic text on solid mechanics, by A. E. H. Love, but I can't quite make sense of the math. He gives a formula for the mode shape in terms of "spherical solid harmonics" and "spherical surface harmonics," which he uses and discusses in a way that doesn't seem to match Wikipedia's description of spherical harmonics. Can you help me identify these functions? The following facts seem to be important:
- The general case of a "spherical solid harmonic" is denoted <math>V_n</math>. Note the presence of only one index, rather than two.
- <math>V_n = r^n S_n</math>, where <math>S_n</math> is a "spherical surface harmonic." I would expect S to be equivalent to Y, except that it's missing one index.
- Unlike the regular spherical harmonics Y, the "spherical solid harmonics" V apparently come in many classes, three of which are important to his analysis and denoted , , and . The description of the distinction between these classes makes zero sense to me.
- Several vector-calculus identities involving V are given. I can type these up if anyone wants to see them.
Does anyone know what these functions V or S are? --Smack (talk) 18:59, 17 February 2010 (UTC)
- S is surely just Y with <math>\ell = n</math> and the m index suppressed (perhaps azimuthal variations are less important here?), which makes V a (regular) solid harmonic R with the same index variations. I don't know what the classes of solid harmonics are supposed to be, unless he's denoting the different values of m as different classes (0 and ±1, or 0/1/2?). --Tardis (talk) 20:51, 17 February 2010 (UTC)
math conversion two
is there a website where I can convert litre into millilitre, convert litre into pint, convert litre into gallon, convert litre into kilogram, litre into decalitre and such? —Preceding unsigned comment added by 74.14.118.209 (talk) 20:22, 17 February 2010 (UTC)
- Google will do most of that for you, e.g. type "2 litres in pints". Litre to kilogram though would be dependent on the density of what you are measuring. Note that some of your conversions are simply multiplication or division based on the prefix (litre to millilitre, for example). --LarryMac | Talk 20:32, 17 February 2010 (UTC)
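Or, if you end up doing it yourself, a table of factors is all there is to it. A rough Python sketch (the factors are ones I typed in by hand, so double-check them; the pint here is the imperial pint and the gallon the US gallon, which are only two of several definitions):

TO_LITRES = {
    'millilitre': 0.001,
    'litre': 1.0,
    'decalitre': 10.0,
    'pint (imperial)': 0.56826125,
    'gallon (US)': 3.785411784,
}

def convert(value, frm, to):
    return value * TO_LITRES[frm] / TO_LITRES[to]

print(convert(2, 'litre', 'pint (imperial)'))   # about 3.52
# litre -> kilogram is not a pure unit conversion: multiply litres by the
# liquid's density in kg/L (about 1.0 for water), as noted above.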
Minkowski paper on L1 distance
I know that the L1 distance is often referred to as the Minkowski distance. I'm trying to find out where (in which paper or book) Minkowski introduced the L1 distance. I can only find many references stating that he introduced the topic, but no references to a specific paper or book. Does anyone here know the name of the paper/book? -- kainaw™ 20:59, 17 February 2010 (UTC)