Wikipedia:Reference desk/Mathematics
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
September 12
Laplace's Equation
So, I have a function φ satisfying ∇²φ = 0 in V and φ = 0 on S, where V is a region in R³ and S is the bounding surface, and I have to prove that φ = 0 everywhere in V. I'm not sure whether what I've done is airtight, so I just want to check it here. Basically, I've used a combination of Green's First and Second Identities to deduce that a certain surface integral vanishes, and since there are no restrictions on the second function appearing in the identities, it is a completely arbitrary function and so the only way this is always 0 is if φ = 0. Is this a solid argument, does it need tightening up, or is it just plain wrong? Thanks asyndeton talk 13:34, 12 September 2010 (UTC)
- I'm not sure I follow your argument. The integral is over S, but we want to know what φ does on V. We already know φ is zero on S. Rckrone (talk) 15:04, 12 September 2010 (UTC)
- You're right, that was a stupid thing to say. The only other point of significance I can see in my almost indecipherable working is another vanishing integral identity. This seems like an even weaker argument, but can you again apply the arbitrariness of the second function and deduce that φ must be 0? asyndeton talk 15:11, 12 September 2010 (UTC)
- Your argument based on the `arbitrary' nature of the second function seems shaky to me. The function φ is a solution of Laplace's equation, so it is by definition a Harmonic function, which is a fairly restricted class of functions that have some nice properties. You can probably use Harmonic_function#Maximum_principle to get your result, but our article doesn't seem to mention how this fact is derived. SemanticMantis (talk) 16:09, 12 September 2010 (UTC)
- It seems clear why it works: ∇φ is divergence free in V so if it's non-zero somewhere we can consider the curve σ that follows the direction of ∇φ, along which φ is strictly increasing. σ can't be a closed curve so it has to meet S at each end and then φ along σ must have a strict maximum at one of those end points. If φ=0 on S, then this would be a contradiction and so ∇φ is uniformly zero on V and φ is constant and so zero. That doesn't really seem rigorous though. I'm sure there must be some way to use the vector calculus theorems to make this argument succinctly but I'm very bad at vector calculus. Rckrone (talk) 16:46, 12 September 2010 (UTC)
- Thanks for your answer. I'll have to think on this for a while because it's slightly more sophisticated than I can cope with at first sight! asyndeton talk 09:00, 13 September 2010 (UTC)
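For reference, the Green's-identity approach can be made airtight without any arbitrary test function (assuming φ is smooth enough up to the boundary): Green's first identity with both functions taken to be φ gives

\[ \int_V \left( \varphi\,\nabla^2\varphi + |\nabla\varphi|^2 \right) dV = \oint_S \varphi\,\frac{\partial\varphi}{\partial n}\,dS . \]

The right-hand side vanishes because φ = 0 on S, and the term φ∇²φ vanishes because ∇²φ = 0 in V, so ∫_V |∇φ|² dV = 0. Hence ∇φ ≡ 0 in V, so φ is constant on each connected component of V, and by continuity it equals its boundary value, which is 0.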
Polynomial division
Hi. I've got a puzzle that asks me to divide something like x^100 + x + 1 by x^2 − 3 and find the remainder. What is a concise way of doing that? Obviously polynomial long division and even synthetic division won't help much here. PS: I might have made a minor error in the coefficients or terms, since I'm recalling the puzzle from memory. If I seem to be mistaken you can make slight alterations, but I know for sure that the dividend is a trinomial with a first term with a huge exponent, a linear second term, and a constant third term, and that the divisor is the difference between a quadratic first term and a constant. Thanks. 24.92.78.167 (talk) 17:17, 12 September 2010 (UTC)
- We have x^100 + x + 1 = (x^2 − 3)q(x) + ax + b, and we want to find the rational numbers a and b, preferably without finding q. Try substituting appropriate values for x. Algebraist 17:26, 12 September 2010 (UTC)
- You can do it in your head by working mod x^2 − 3. Basically just pretend x^2 is equal to 3, so x^100 is equal to 3^50. That reduces the expression to linear and you're done.--RDBury (talk) 18:42, 12 September 2010 (UTC)
- If any further hint is needed, notice that what RDBury is saying is really the same thing as what Algebraist is saying. But if you have further questions, keep talking. Michael Hardy (talk) 22:54, 12 September 2010 (UTC)
I had some time and started the long division looking for a pattern, and I found that the exponent decreases by two every time and the coefficient gains another factor of 3 at each step. However I'm intrigued by RDBury's comment that 'you can do it in your head by working mod x^2-3'. What is meant by this? Why is it possible to pretend x^2 is equal to 3? Thanks. 24.92.78.167 (talk) 01:05, 13 September 2010 (UTC)
- So the basic idea is that to find the remainder of the polynomial when divided by x^2 − 3, we want to throw out anything that's a multiple of x^2 − 3 and see what's left. One way to do this is to say that now we are operating in an environment where x^2 − 3 is congruent to zero, and any multiple of it is zero as well. We can write this x^2 − 3 ≡ 0. Now any two polynomials are congruent if their difference is a multiple of x^2 − 3. So for example x^2 + x ≡ x + 3 since their difference is x^2 − 3, and x^4 ≡ 3x^2 since their difference is x^4 − 3x^2 = x^2(x^2 − 3), which is a multiple of x^2 − 3. In particular we have x^2 ≡ 3, which is to say that in this environment x^2 and 3 are interchangeable. So we can freely turn all the x^2s into 3s to get our polynomial into a congruent one of the form we want, namely one with only a linear and constant term. This congruent polynomial differs from the original by a multiple of x^2 − 3, so it's the remainder we want. This idea is pretty central to dealing with quotient rings of polynomial rings but that might not mean much if you haven't seen any abstract algebra. Modular arithmetic is also relevant, although note that most of that article deals with setting some number congruent to zero while here we are setting a polynomial congruent to zero. Still the idea is the same. Rckrone (talk) 01:46, 13 September 2010 (UTC)
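To make the mod-(x^2 − 3) trick concrete, here is a small Python sketch that reduces a power of x modulo x^2 − 3 by repeated squaring. The dividend x^100 + x + 1 is only a stand-in, since the exact linear and constant terms of the original puzzle were not recorded.

```python
# Work modulo x^2 - 3: every residue can be written as a*x + b,
# and multiplication uses the substitution x^2 = 3.

def mul(p, q):
    a1, b1 = p
    a2, b2 = q
    # (a1*x + b1)(a2*x + b2) = a1*a2*x^2 + (a1*b2 + a2*b1)*x + b1*b2
    #                        = (a1*b2 + a2*b1)*x + (b1*b2 + 3*a1*a2)
    return (a1 * b2 + a2 * b1, b1 * b2 + 3 * a1 * a2)

def x_power(n):
    """Return x^n modulo x^2 - 3 as a pair (a, b) meaning a*x + b."""
    result, base = (0, 1), (1, 0)          # the residues 1 and x
    while n:
        if n & 1:
            result = mul(result, base)
        base = mul(base, base)
        n >>= 1
    return result

a, b = x_power(100)
assert (a, b) == (0, 3 ** 50)              # x^100 is congruent to 3^50
# For the stand-in dividend x^100 + x + 1, the remainder is therefore x + 3^50 + 1.
print("remainder:", "x +", b + 1)
```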
Summation Convention
Following on from my question on Laplace's equation, I have to use Cartesian coordinates to show that, for a function φ(r) which depends only on the radial coordinate r,
∇φ = φ'(r) x/r and ∇²φ = φ''(r) + (2/r) φ'(r).
Normally, I can work quite well with the summation convention and the first falls quite nicely into place but I seem to have a fundamental problem when it comes to finding the second one. If I show my working can someone point out the mistake? Caveat lector, my chain rule and use of total/partial derivatives may be a bit sloppy.
Where's it going wrong? Thanks asyndeton talk 20:02, 12 September 2010 (UTC)
- Here's how it should go: write r = (x_j x_j)^(1/2), so ∂r/∂x_i = x_i/r and ∂φ/∂x_i = φ'(r) x_i/r. Then ∂²φ/∂x_i∂x_i = φ''(r)(x_i x_i)/r² + φ'(r)(δ_ii/r − (x_i x_i)/r³) = φ''(r) + ((n−1)/r) φ'(r), which in R³ gives φ'' + (2/r)φ'.
- I couldn't figure out what the mistake was for the longest time until I realized the formula in question had to be dimension dependent. Specifically δ_ii = n in R^n, and so the derivation needed to reflect that somehow. Rckrone (talk) 04:34, 13 September 2010 (UTC)
- Urgh, now you say it, it's so obvious that I should have had a δ_ii in there. My punishment for trying to be clever and leaving steps out. Many thanks. asyndeton talk 08:58, 13 September 2010 (UTC)
Probability Question
I am usually good in solving probability problems, but I stumbled upon the following question online and I have no idea how to approach it.
"You pick two cards (one after the other with no replacement) from a deck of 52 playing cards. What is the probability that the second card has a higher rank than the first one".
Please note that I am curious how to approach such a problem - I am not interested in a numerical answer.
Thanks Help Desk - where we get a chance to brush up on our math, after graduating from college :)
78.101.151.192 (talk) 20:44, 12 September 2010 (UTC)
- Looks like 24/51. For the lowest ranked card, the chance of something higher is 48/51. For the highest ranked card, chance of higher is 0/51. For a mid-ranked card it would be 24/51, and as it's linear, that would be the overall probability. David Biddulph (talk) 21:12, 12 September 2010 (UTC)
- First, compute the probability of drawing two cards of equal rank. Then compute the conditional probability of the second card having a higher rank under the condition that the ranks are not equal. Then combine these using Bayes' theorem. – b_jonas 21:50, 12 September 2010 (UTC)
- I do not understand how Bayes' theorem comes in. There is a 48/51 chance of drawing two cards with unequal rank, and if you draw two cards with unequal rank then there's a 1/2 chance the first one will be lower, since it's completely symmetric among the two. Then you multiply. Rckrone (talk) 22:54, 12 September 2010 (UTC)
- Rckrone: You're right, it's not Bayes' theorem, but the Law of total probability. Sorry. – b_jonas 11:26, 13 September 2010 (UTC)
- Generally, to approach such a problem, begin with a similar problem which can be solved by counting rather than by computing. Solve this one: "You pick two cards (one after the other with no replacement) from a deck of 8 playing cards. What is the probability that the second card has a higher rank than the first one". Draw an 8 by 8 table like a chessboard where the rows represent the first card (i) and the columns represent the second card (j). Each of the 64 fields represents the drawing of two cards. The 8 fields i=j are discarded because the same card cannot be picked twice. The 28 fields in the upper right triangle represent that the second card has a higher rank than the first one, i<j. The 28 fields in the lower left triangle represent that the second card has a lower rank than the first one, i>j. (Assuming that different cards have different rank. I am not sure what you mean by the rank of a card. If different cards may have equal ranks, then mark the corresponding fields on the chessboard). The probability you are seeking is the number of favorable cases divided by the number of possible cases. Now you can approach your original problem. Bo Jacoby (talk) 07:44, 13 September 2010 (UTC).
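A brute-force check of the answers above (assuming a standard deck of 13 ranks × 4 suits, with only rank mattering), as a Python sketch:

```python
from fractions import Fraction
from itertools import permutations

deck = [(rank, suit) for rank in range(13) for suit in range(4)]
pairs = list(permutations(deck, 2))              # all 52*51 ordered draws without replacement
higher = sum(second[0] > first[0] for first, second in pairs)
print(Fraction(higher, len(pairs)))              # -> 8/17, which equals 24/51
```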
September 13
exponents
Hi, I am supposed to solve this and show it in exponential form
(3x)^2 (9x^-1)
So, so far I have done is:
9x^2 (9x^-1)
9x^2 (1/9x)
x (1/1)
x
But I have a feeling I didn't do this right, since I'm supposed to answer it in an "exponential" form. Can anyone help please? --174.5.131.59 (talk) 04:12, 13 September 2010 (UTC)
- Is the expression (3x)^2(9x)^-1 or (3x)^2(9x^-1)? If it's the second one, then you should check carefully what you did. Rckrone (talk) 04:40, 13 September 2010 (UTC)
Thanks for showing me the superscripts! It is the second one, (3x)^2(9x^-1). I did check carefully over and over but the solution seems to be escaping me. Can you give me some tips please? --174.5.131.59 (talk) 05:18, 14 September 2010 (UTC)
- In 9x-1, the exponent is only applying to the x, not to the 9. In your second line, you applied it to the 9.--203.97.79.114 (talk) 05:22, 14 September 2010 (UTC)
- Yes, ab^c means a(b^c), not (ab)^c. E=mc^2 means E=m(c^2)=c^2m. It is common practice to put the constant, which here is c^2, before the variable, which here is m, but Einstein did not do that in this case, and everybody follows Einstein and writes E=mc^2. Note the silly exception: cm^2 means square centimeter, not centi squaremeter, so cm^2 = (cm)^2. Bo Jacoby (talk) 06:03, 14 September 2010 (UTC).
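For what it's worth, the two readings can be compared mechanically; a quick sympy sketch (the expression is the one in the question, the distinction between the readings follows the comments above):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
reading_1 = (3*x)**2 * (9*x)**-1      # exponent -1 applied to the whole "9x"
reading_2 = (3*x)**2 * 9 * x**-1      # exponent -1 applied to x only
print(sp.simplify(reading_1))         # -> x
print(sp.simplify(reading_2))         # -> 81*x, i.e. 3**4 * x
```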
Inverse function
How do I invert
to get —Preceding unsigned comment added by 118.208.129.11 (talk) 08:58, 13 September 2010 (UTC)
- First step is to swap the x's and y's. Then make y the subject of the formula by multiplying out by the denominator and grouping like terms. Can't give any more hints than that without actually doing it for you. Ask if you're still stuck. Zunaid 10:21, 13 September 2010 (UTC)
Be careful with notation. y^-1 usually means the reciprocal of y. But apparently that's not what you mean here. Michael Hardy (talk) 03:53, 14 September 2010 (UTC)
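The specific function in the question is not shown above, so as a generic illustration of Zunaid's procedure, here is a sympy sketch for a function of the assumed form y = (ax+b)/(cx+d):

```python
import sympy as sp

x, y, a, b, c, d = sp.symbols('x y a b c d')
expr = (a*x + b) / (c*x + d)              # a generic function of the assumed form

# "Swap the x's and y's, then make y the subject": solve x = (a*y + b)/(c*y + d) for y.
inverse = sp.solve(sp.Eq(x, expr.subs(x, y)), y)
print(inverse)                            # -> [(b - d*x)/(c*x - a)] (up to rearrangement)
```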
Math
I'm not so good at maths. Is math something you learn, or something you're always good at? Mediaiiwk (talk) 16:09, 13 September 2010 (UTC)
- It is something you learn, and then you enjoy it, and so you learn even more, and so on. Be patient and have fun doing math, then you too will be good at it. Bo Jacoby (talk) 16:31, 13 September 2010 (UTC).
- Agree with Bo: the biggest obstacle to learning math, for most people, is that they are afraid they won't be good at it. You just have to remember that no one is good at it to start, but if you keep working at it you will get it, and the moment you get it is a really satisfying feeling. --Ludwigs2 19:20, 13 September 2010 (UTC)
- It is similar to the situation with English literature. There are many writers who try very hard and go to all sorts of courses about good writing but only a small percentage write anything you'd want to read. And even if you're content to read novels rather than try and write them there are some that can be quite heavy going and yet are acknowledged as great literature. And then there's people who don't read. And then there's people who can't read and even with good help have trouble getting what they want in a supermarket. There's quite a few good books in popular maths that are accessible to most people with some interest, like novels are to those who like reading. And just as writing reasonable English so you can say what you think is helped by a bit of training, most people could do better with their maths and get their way past interest rates, builders' estimates, sizing a room for painting and suchlike tasks with reasonable ease. Dmcq (talk) 10:11, 14 September 2010 (UTC)
September 14
Vector as an integrand
I'd like to know: is it possible for the integrand to be a vector? For example ∬n dσ, where n is a unit normal vector. Re444 (talk) 12:44, 14 September 2010 (UTC)
- Absolutely it's possible to integrate vectors. I'd only be guessing at what your notation means, though. Why is it a double integral? What are you integrating over? n is normal to what? And what's dσ?
- But if I had to guess, I'd guess that you're integrating over a surface, and dσ is an element of area. Sure, you can integrate that. I probably wouldn't write it as a double integral, though, because that suggests that you're doing two integrations, whereas here you're doing only one (albeit in two dimensions). --Trovatore (talk) 09:54, 14 September 2010 (UTC)
- Yes, in my example the integration is over a surface in 3D. Till now I've run into some vector integral equations whose integrands reduce to scalars, like the divergence and Stokes theorems. In the case of vector integrands, where do they usually appear and how should we deal with them? By decomposing them into their components? And one more question: I've read in a fluid mechanics text that according to Gauss' theorem, my example integral equals zero for any closed surface. But the integrand in Gauss' theorem is a scalar, not a vector. Re444 (talk) 12:44, 14 September 2010 (UTC)
- Certainly you can decompose to components: if v = v1e1 + v2e2 + v3e3, then ∬v dσ = e1∬v1 dσ + e2∬v2 dσ + e3∬v3 dσ by linearity. I don't immediately see how Gauss' theorem applies to the case of integrating a closed surface's normal vector, but it does seem that it should be by a projection argument (per dimension). --Tardis (talk) 14:46, 14 September 2010 (UTC)
- Something like: ∮_S n_i dσ = ∮_S e_i·n dσ = ∫_V ∇·e_i dV = 0.
- But can it be done without decomposing the vectors? -- 1.46.187.246 (talk) 16:31, 14 September 2010 (UTC)
- I think the obvious generalization of the definition of the Riemann integral should work in this context, without ever having to give your vectors a coordinate system. There might be some picky details to work out as to what is meant by "refinement", but I expect it should all work.
- If you want to do the (mostly) more general Lebesgue integrals, I expect that to get even pickier, but again, still work. --Trovatore (talk) 18:10, 14 September 2010 (UTC)
- Beautiful! thanks everybody. Re444 (talk) 10:19, 15 September 2010 (UTC)
Vector Problem
Hi. If I have two known vectors, n1 and n2, and I know scalars d1 and d2, can I find a given the following equations: dot( a, n1 ) = d1 and dot( a, n2 ) = d2.
I feel that this should be very simple - perhaps by taking the dot product of each equation?
Incidentally, it is to find the perpendicular distance from the origin to the line of intersection of two planes, with normals n1 and n2 and perpendicular distances to the origin d1 and d2. Just stuck on this point - any help appreciated!
Thanks. 128.16.7.149 (talk) 16:33, 14 September 2010 (UTC)
- I assume that a is a vector that you are attempting to find. Then, a⋅n1 = a1·n1,1 + a2·n1,2 + a3·n1,3 + ... There isn't a lot you can do to reduce it. However many elements there are in the variables is how many unknowns you will have.
- I think you will find more luck looking at eigenvectors. It appears that you want d1 to be an eigenvalue of n1 that will produce the eigenvector a. But, I could be reading this question completely wrong. -- kainaw™ 17:46, 14 September 2010 (UTC)
- What's "an eigenvalue of n1" mean, since is a vector (not a matrix)? --Tardis (talk) 17:55, 14 September 2010 (UTC)
- That is my confusion of the question. Related to your answer below, how can arbitrary length vectors produce the results the questioner is looking for? Either the questioner is misusing the term "vector" or a is not a vector (which makes the "dot" implication of "dot product" confusing) or some other issue that I am not reading correctly. He states that he is looking for a perpendicular vector (distance), which made me think of eigenvectors. -- kainaw™ 18:09, 14 September 2010 (UTC)
- You can always find the dot product of two vectors if they are of the same dimension. Without Emil's addition below, the OP has two equations, and a number of unknowns equal to the dimension of the vectors. Thus, if the vectors are in 2 dimensions, there will probably be a unique solution. In higher dimensions you will have more variables and thus many solutions. -- Meni Rosenfeld (talk) 18:31, 14 September 2010 (UTC)
- By the rowwise interpretation of matrix multiplication, your two equations are equivalent to the single matrix equation whose coefficient matrix has rows n1 and n2 and whose right-hand side is (d1, d2). Whether that has a unique solution or not depends first on the dimension of your vectors: if they're 3-vectors, you can get only a parameterized family of solutions because the matrix is 2×3 (thus not square, thus not invertible). If they're 2-vectors, then for almost all n1, n2 there is a unique solution a. (If they're 1-vectors (scalars), then for almost all n1, n2 there is no solution, but that's probably not the case you had in mind.) --Tardis (talk) 17:55, 14 September 2010 (UTC)
- If I understand it correctly, the OP is talking about 3-dimensional space, and his a is also orthogonal to n1 × n2, which gives the third linear equation needed to make it regular: a⋅(n1 × n2) = 0. The system can then be solved by any method mentioned in the system of linear equations article.—Emil J. 18:26, 14 September 2010 (UTC)
- Another way to state the extra condition is that a is a linear combination of n_1 and n_2. If I write a = u_1 n_1 + u_2 n_2, where u_1 and u_2 are scalars, then (using the fact that n_1 and n_2 are normals, which I take to mean they have norm 1) the equation above simplifies to the 2-dimensional system u_1 + (n_1⋅n_2) u_2 = d_1, (n_1⋅n_2) u_1 + u_2 = d_2.—Emil J. 18:52, 14 September 2010 (UTC)
Thanks all for your helpful contributions. There are still clearly things I need to learn. The deceptively simple trick here was decomposing the vectors into their components, then simply solving through inverting the matrix (thanks Emil). I think I was distracted by trying to solve the problem using vector operations (dot, cross etc) only. Thanks again. 128.16.7.149 (talk) 09:35, 15 September 2010 (UTC)
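For completeness, EmilJ's first formulation in 3D reduces to a single call to a linear solver; a sketch with made-up numbers (the normals and distances below are illustrative only):

```python
import numpy as np

n1 = np.array([1.0, 0.0, 0.0])                   # illustrative unit normals
n2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)
d1, d2 = 2.0, 3.0                                # illustrative signed distances

# a satisfies a.n1 = d1, a.n2 = d2 and a.(n1 x n2) = 0 (a lies in the span of n1, n2)
M = np.vstack([n1, n2, np.cross(n1, n2)])
a = np.linalg.solve(M, np.array([d1, d2, 0.0]))  # fails only if n1 and n2 are parallel
print(a, np.linalg.norm(a))                      # the point and its distance from the origin
```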
Orthogonality of Fourier basis for L2[R]
The Integral transform article says that any basis functions of an integral transform have to be orthogonal, and then says that the basis functions of the Fourier transform are exp(iux). But these basis functions don't seem to be orthogonal (the inner product of two distinct basis functions, over R, doesn't seem to be 0; I know that you have to take the conjugate of one of the basis functions for inner products in a Hilbert space).
So is the kernel of the Fourier transform actually an orthonormal basis for L2[R]? Is there a different definition for orthogonality when operating over the entire real line? (The basis functions don't seem to be square integrable, for instance).
-- Creidieki 17:06, 14 September 2010 (UTC)
- (Since no one actually familiar with the subject gave an answer, I'll try.) I'm afraid the article is oversimplifying the picture. The Fourier transform on R does not directly define an operator from L2 to L2, but from L1 to L∞. The "basis functions" are also only L∞ (moreover, notice that L2 has countable dimension as a Hilbert space, whereas there are continuum many of these "basis functions"). In order to get an operator on L2, you take the Fourier transform on L1 ∩ L2, and (uniquely) extend it by continuity to all of L2. This works because L1 ∩ L2 is a dense subset of L2, the transform takes L1 ∩ L2 functions to L2, and it is a bounded linear mapping. This kind of construction by extension from a dense subspace works in general for any locally compact abelian group in place of R, see Pontryagin duality.—Emil J. 10:49, 15 September 2010 (UTC)
- The Plancherel theorem is relevant to EmilJ's answer. You said: "But these basis functions don't seem to be orthogonal (the inner product of two distinct basis functions, over R, doesn't seem to be 0)." If the "u" you quote is allowed to range over all integers, then the fact that the basis functions form a (countable) orthonormal set is an easy computation. I think that the mistake you are making is to say that the inner product is defined by integration over the entire real line - it is not. The inner product is defined by integration over the interval [0, 2π]. (Of course, when I say "the inner product is defined by integration over" I refer to the "integration" in the following rule for the inner product of two functions:
- ⟨f, g⟩ = (1/2π) ∫₀^{2π} f(x) conj(g(x)) dx, where f and g are usually taken to be complex-valued, measurable, 2π-periodic functions. (Equivalently, they are complex, measurable functions defined on the unit circle in the complex plane.).) Hope this helps! PST 23:59, 19 September 2010 (UTC)
Octahedral Symmetry
OK, so the group G of both orientation preserving and orientation reversing symmetries of a regular octahedron has order 48. If we then consider the set D that consists of the lines joining pairs of opposite vertices (so there will be three such lines) and want to find the stabiliser of one of the lines, do we just bash out the orbit-stabiliser theorem and say that the size of the stabiliser of a line is the size of G (48) divided by the size of the orbit (3), giving us 16? And is a question asking you to 'identify the stabiliser' likely to mean just find the order of the stabiliser or something more complex? Thanks asyndeton talk 19:15, 14 September 2010 (UTC)
- They probably want you to actually say which group of order 16 it is. (There aren't that many to choose from.) Rckrone (talk) 03:58, 15 September 2010 (UTC)
September 15
3rd Axiom of Probability
There is a line in the solution to a homework problem that I don't get.
It reads
Noting that A = (A∩B) ∪ (A∩B^c) and B = (A∩B) ∪ (A^c∩B), we can use the third axiom of probability to write
P(A) = P(A∩B) + P(A∩B^c) and P(B) = P(A∩B) + P(A^c∩B)
I don't see how the 3rd axiom is being used to get from one bit to the next, and I can't see where the second bit comes from —Preceding unsigned comment added by 130.102.158.15 (talk) 03:21, 15 September 2010 (UTC)
- You're missing a few equality signs. It should be
- P(A) = P((A∩B) ∪ (A∩B^c)) = P(A∩B) + P(A∩B^c) and
- P(B) = P((A∩B) ∪ (A^c∩B)) = P(A∩B) + P(A^c∩B). Where you have clearly used the fact that the probability of a union (of disjoint events) is the sum of probabilities. -- Meni Rosenfeld (talk) 06:49, 15 September 2010 (UTC)
- It looks like you're missing a couple of equals signs - P(A) = P((A∩B) ∪ (A∩B^c)) = P(A∩B) + P(A∩B^c), where the first equality is just using the substitution for A. The second equality is then where the 3rd axiom comes in - since the substitution we're using for A breaks it into two mutually exclusive events (the event where both A and B happen and the event where A happens but B doesn't), the probability of their union (i.e. either "A and B both happen" or "A happens but B doesn't") is the sum of their individual probabilities. The expression for P(B) is similar. Confusing Manifestation(Say hi!) 06:50, 15 September 2010 (UTC)
Polynomial Achieving Infimum
Studying analysis the following question came up. Let p(x,y) be a polynomial function of two real variables. Suppose that p(x,y) > 0 for all real x and y. Does every such function attain its infimum? Prove or disprove. My intuition tells me that it is true (because you can't have something like a decaying exponential which will always decrease and get arbitrarily close to zero). A polynomial must have some zeros (maybe not real). But I don't have any idea how to even begin. Help? 174.29.63.159 (talk) 04:25, 15 September 2010 (UTC)
- Here's how I would do it, but there may be better ways to approach it. First it should be clear how to show a positive polynomial in one variable achieves its infimum over R. Consider all the lines through the origin in R2. On each of these lines L, p can be expressed as a polynomial in one variable, so it attains its infimum bL. Let f: S1 → R map each point x on the circle to bL where L is the line that passes through x. Then show that f achieves its infimum. Rckrone (talk) 05:24, 15 September 2010 (UTC)
- Outside a sufficiently large closed disk, {(x,y) : x2+y2≤R2}, you have p(x,y)>p(0,0). The disk is a compact set and a continuous function on a compact set attains its infimum. Bo Jacoby (talk) 08:33, 15 September 2010 (UTC).
- I don't think the first claim is true. For example consider p=x2+1. I'm also having serious doubts about the strategy I suggested. I thought it would be relatively easy to show that the function f I constructed is continuous, but the more I think about it, the less obvious it is. Rckrone (talk) 12:56, 15 September 2010 (UTC)
- (xy − 1)^2 + y^2 does not attain its infimum. (It took me quite a while to find; on retrospect, it looks like something I would have seen sometime). -- Meni Rosenfeld (talk) 14:15, 16 September 2010 (UTC)
- Ah, the Rabinowitsch trick! How could I miss that (I spent some time trying to fix the arguments above). BTW, also works.—Emil J. 14:36, 16 September 2010 (UTC)
- Wait....can someone explain how the above example(s) work? How does it not attain an infimum? What's happening in the above function? I chucked it into Wolfram Alpha to see what was happening but a 3D plot is none too enlightening... Zunaid 18:31, 16 September 2010 (UTC)
- Oh okay...I worked it out. Choose y as close to zero as you like (so that y^2 term is minimized), then choose x=1/y to make the first term zero. You can therefore make the entire expression as close to zero as you like, however you can't actually attain zero because once you set y=0 the first term automatically jumps back up to 1. Hence the infimum is zero but can never actually be attained. Clever! Zunaid 18:36, 16 September 2010 (UTC)
- Exactly. Also, Alpha can give a better visualization if you help it a bit (note that I interchanged x and y). -- Meni Rosenfeld (talk) 19:35, 16 September 2010 (UTC)
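A quick numerical illustration of the behaviour Zunaid describes, taking the polynomial to be (xy − 1)^2 + y^2 as above:

```python
p = lambda x, y: (x*y - 1)**2 + y**2

for k in range(1, 7):
    y = 10.0 ** (-k)
    print(y, p(1 / y, y))      # p(1/y, y) = y^2 -> 0, yet p(x, 0) = 1 for every x
```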
Box-Muller Transformation
On a completely different note, I am trying to show that the Box-Muller transform takes two independent uniform random variables (from [0,1] each) to two independent standard normal random variables. So when I was trying to invert the transformation, I came across the expression X2/X1 = tan(2*pi*U2). I have a problem at this point taking the inverse tangent of both sides. My reasoning is this. Since U2 can take on any value in (0,1], 2*pi*U2 can take on any value in (0,2*pi] but tangent is not invertible in this domain (as a function). So how/why can we do this? I took the forward Jacobian and I know that it is never zero on (0,1]x(0,1]. So the entire transformation is invertible. But when I was trying to invert it, I couldn't convince myself that this is valid. How does this work? Thanks! 174.29.63.159 (talk) 04:36, 15 September 2010 (UTC)
- Your expression is correct but not sufficient to define the transformation which is (quoting the Box-Muller transform article that uses Z0, Z1 rather than X1,X2)
- Z0 = sqrt(−2 ln U1) cos(2π U2) and Z1 = sqrt(−2 ln U1) sin(2π U2). Bo Jacoby (talk) 08:17, 15 September 2010 (UTC).
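A minimal numerical sketch of the forward transform quoted above, checking empirically that the output looks standard normal (this only illustrates the formulas; it is not a substitute for the change-of-variables argument):

```python
import numpy as np

rng = np.random.default_rng(0)
u1 = 1.0 - rng.random(100_000)                   # uniform on (0, 1], avoids log(0)
u2 = rng.random(100_000)

r = np.sqrt(-2.0 * np.log(u1))
z0, z1 = r * np.cos(2 * np.pi * u2), r * np.sin(2 * np.pi * u2)

print(z0.mean(), z0.std(), z1.mean(), z1.std())  # approximately 0, 1, 0, 1
print(np.corrcoef(z0, z1)[0, 1])                 # approximately 0
```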
Riemann Zeta Function
On yet another completely different note, I am doing some reading on Riemann and his zeta function and the prime number theorem. I am just curious where (what journals, good papers, books, or websites) I can find some up to date info on this. I don't know much about complex analysis so something that is easy to follow would be appreciated (like the most recent list of the computed zeros or something). I am just wondering what the most cutting edge research is in this field. Maybe a popular book for the general (slightly math oriented) audience? Thanks! 174.29.63.159 (talk) 04:41, 15 September 2010 (UTC)
- How about Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics by John Derbyshire (ISBN 978-0452285255). --Qwfp (talk) 08:41, 15 September 2010 (UTC)
- The last paragraph of the introductory section of the Riemann hypothesis article actually gives a great list of sources. -- Creidieki 10:57, 15 September 2010 (UTC)
Synthetic division
Please, can someone explain why synthetic division works? The article uses too many terms I have not gotten to in class yet. Thank you. —Preceding unsigned comment added by 24.92.78.167 (talk) 23:02, 15 September 2010 (UTC)
- Dividing a polynomial by x-a amounts to multiplying that polynomial by
1/(x-a). Consider expanding 1/(x-a) in a geometric series as follows: 1/(x-a) = (1/x)·(1/(1 − a/x)) = 1/x + a/x^2 + a^2/x^3 + ...
- If you multiply your polynomial by this and consider the coefficients of the largest power of x, the next largest power etc. etc. you'll see the synthetic division method emerge. Count Iblis (talk) 00:26, 16 September 2010 (UTC)
- I am not surprised that you are having difficulty understanding our synthetic division article - it is a very poor explanation of the process. Synthetic division is really quite straightforward. Here is an example. Suppose we want to divide by :
- To match the term we need to start by multiplying by
- This gives us . Subtract this from and we are left with .
- So
- To match the term we now need to multiply by
- This gives us . Subtract this from and we are left with
- So
- To match the term we now need to multiply by
- This gives us . Subtract this from and we are left with
- So, putting all this together, we have
- If you know long division then this should look familiar - it is like long division but without any carries. You can think of it as long division in an "infinite" base. Gandalf61 (talk) 15:22, 16 September 2010 (UTC)
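To make the mechanics of synthetic division concrete, here is a small Python sketch for division by (x − a); the example polynomial is arbitrary and not the one from the worked example above:

```python
def synthetic_division(coeffs, a):
    """Divide a polynomial by (x - a) using synthetic division.

    coeffs lists the coefficients from the highest degree down,
    e.g. [1, -6, 11, -6] for x^3 - 6x^2 + 11x - 6.
    Returns (quotient coefficients, remainder).
    """
    row = [coeffs[0]]
    for c in coeffs[1:]:
        row.append(c + a * row[-1])      # bring down, multiply by a, add
    return row[:-1], row[-1]

q, r = synthetic_division([1, -6, 11, -6], 2)
print(q, r)      # -> [1, -4, 3] 0, i.e. quotient x^2 - 4x + 3 with remainder 0
```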
September 16
Semisimple modules completely determined by character trace map?
Hello,
I have been learning quite some representation theory, but I have stumbled upon some problems, since most authors state the results for group algebras over the complex numbers, while I am interested in semisimple algebras and modules in general.
Therefore:
let A be any algebra over any field F, and let M_1,M_2 be semisimple modules over A. Is it true that M_1 and M_2 are isomorphic if and only if the two trace maps from A to F are equal?
I know this is true if $F$ is the field of complex numbers, but I am interested in the most general case.
- How about this? Let A=F=the field of order two, M1=A⊕A (trace always 0) and M2=A⊕A⊕A⊕A (trace also always 0), but M1 and M2 are not isomorphic. The reason for starting with modules over the complex numbers is that the theory is much simpler in that case and you at least get Burnside's theorem. There is an entire field of modular representation theory which is more general.--RDBury (talk) 16:29, 16 September 2010 (UTC)
Rado numbers
Consider the integer linear equation c1x1 + c2x2 + ... + cnxn = 0, where the ci are nonzero integers. Suppose it is given that there is a natural number N such that, if {1,2,...,N} is partitioned into two sets, one of these always contains a solution of the equation. The minimal such N is called the Rado number of the equation. I am looking for general bounds on such N, in the cases where it exists. Where can I possibly find such results? Thanks-Shahab (talk) 11:32, 16 September 2010 (UTC)
Surface Normal
Given that C is a closed curve that is the boundary of the triangle T with vertices (1,0,0), (0,1,0) and (0,0,1), I have to directly evaluate the relevant surface integral, without using Stokes' Theorem, by showing that the normal can be written in the form (1,1,1) dy dz, but I don't know how to find the normal. I think I'm just being stupid and should know it but I just can't think of it. Can someone help please? Thanks. asyndeton talk 16:29, 16 September 2010 (UTC)
- The equation of the plane through the three given points is x+y+z=1, so the normal is ∇(x+y+z).--RDBury (talk) 16:33, 16 September 2010 (UTC)
- So simple. Cheers. asyndeton talk 16:36, 16 September 2010 (UTC)
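A sketch of where the (1,1,1) dy dz form comes from, parameterizing the triangle by y and z: on the plane containing T we have x = 1 − y − z, so r(y,z) = (1−y−z, y, z) and

\[ \mathbf{r}_y \times \mathbf{r}_z = (-1,1,0)\times(-1,0,1) = (1,1,1), \]

so the vector area element is dS = (1,1,1) dy dz, consistent with the normal ∇(x+y+z) found above.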
September 17
Probability distribution with kurtosis parameter
Is there a probability distribution which is similar to the normal distribution but with the parameters being mean, standard deviation and kurtosis? Yaris678 (talk) 13:24, 17 September 2010 (UTC)
- Pearson type VII distribution might fit the bill. --Salix (talk): 13:46, 17 September 2010 (UTC)
- Spot on. Thanks! Yaris678 (talk) 18:46, 17 September 2010 (UTC)
Countability
If there exists a bijection between a set X and the natural numbers, making the set X countable, can the same bijection be used to show that any subset of X is countable? Thanks. asyndeton talk 17:35, 17 September 2010 (UTC)
- I assume by countable you mean countably infinite. And my naive answer is no: if you change the domain or range of a function (e.g. by considering a subset of either), then it is no longer the same function. Zunaid 17:43, 17 September 2010 (UTC)
- I think the obvious approach would be to use to the bijection to define functions satisfying the conditions of the Schroeder-Bernstein theorem. AndrewWTaylor (talk) 17:54, 17 September 2010 (UTC)
- The restriction of the original function will define a bijection of the subset and a subset S of natural numbers. Compose it with an increasing enumeration of S to get a bijection with all natural numbers (or initial segment, if the subset is finite).—Emil J. 18:03, 17 September 2010 (UTC)
Let X = {x1, x2, x3, ....} and suppose A is a subset of X. Then let n_1 be the smallest index for which x_{n_1} is in A,
and let n_2 be the smallest index greater than n_1 for which x_{n_2} is in A, and so on.
Keep going as long as A has not been exhausted. Then A = {x_{n_1}, x_{n_2}, x_{n_3}, ...}, and k ↦ x_{n_k} is a bijection between A and either an initial segment of the natural numbers or all of them.
Therefore A is either finite or countably infinite. You don't need the Cantor–Bernstein theorem or the like. Michael Hardy (talk) 21:05, 17 September 2010 (UTC)
- It's not even that complicated. The usual definition of a countable set X is that there exists a bijection between X and some subset of the natural numbers. So yes, Asyndeton, the same bijection can be used directly to show that any subset A of X is countable—just restrict the domain of the bijection to A and the codomain to the image of A. —Bkell (talk) 02:58, 18 September 2010 (UTC)
- But if you define "countable" in that way, then you're still faced with the problem of proving that all countably infinite sets have the same cardinality. My way of doing it takes care of that. Michael Hardy (talk) 22:10, 18 September 2010 (UTC)
- That's a separate problem, isn't it? We don't need to prove that in order to establish the claim in the original post, if we assume the usual definition of countable. —Bkell (talk) 22:10, 19 September 2010 (UTC)
- Thanks for all your answers, especially yours Bkell; unfortunately I'm at a level that demands answers less hi-tech than Schroeder-Bernstein and the like. Restricting the domain and codomain seemed 'obvious' but being 'obvious' does not imply being correct and I wouldn't have thought to specifically mention restriction in my answer. Thanks again. asyndeton talk 12:04, 18 September 2010 (UTC)
A proof that in euclidean space pi is a constant.
I understand of course that in euclidean space (geometry) pi, the ratio between the circumference of a circle and its diameter is a constant. Further, I have been told that in some noneuclidean geometries that the value of pi varies with the size of the circle. I am looking for the proof, which I'm sure exists, that in Euclidean geometry (space) pi is a constant. Your help will be most appreciated. 71.112.10.31 (talk) 18:02, 17 September 2010 (UTC)
- If I was going to construct my own proof, I would start with the fact that Euclidean spaces are vector spaces. Yaris678 (talk) 18:52, 17 September 2010 (UTC)
First try to show that for any two circles in the Euclidean plane, there's a mapping from the space to itself that multiplies all distances by the same constant that takes one of them to the other. (Actually, saying "same" and "constant" is redundant.) Michael Hardy (talk) 21:08, 17 September 2010 (UTC)
- No, saying "same" and "constant" is not redundant; different distances could (in principle) be multiplied by different constants. They aren't, of course, which is the point. --COVIZAPIBETEFOKY (talk) 22:07, 17 September 2010 (UTC)
I think this is true for any normed space, including euclidean space. Define pi as the limit of an infinite sequence, using the Euclidean norm (or any norm - other norms will give a different constant) and assuming you have a circle of radius r. You will find the rs cancel. Yaris678 (talk) 06:16, 18 September 2010 (UTC)
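A sketch of the scaling argument suggested above: after translating both circles to be centred at the origin, the map

\[ f(\mathbf{x}) = \frac{r_2}{r_1}\,\mathbf{x} \]

takes the circle of radius r1 onto the circle of radius r2 and multiplies every distance by r2/r1. It therefore carries inscribed polygons of the first circle to inscribed polygons of the second, multiplying their perimeters by r2/r1; taking the supremum over inscribed polygons (which is how arc length is defined), the circumferences satisfy L2 = (r2/r1)·L1, hence L1/(2r1) = L2/(2r2). So the ratio of circumference to diameter is the same constant for every circle in the Euclidean plane.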
Request for statistics article review
A new editor has written an article Multivariate kernel density estimation and requested feedback. The editor seems to have basic Wikipedia style under control; it needs the review of someone familiar with the field. (I'll also crosspost a couple specific editors.)--SPhilbrickT 21:12, 17 September 2010 (UTC)
- Looks like a very nice article. The main improvement I can think of is that the exposition could use a few more wikilinks. 67.119.14.196 (talk) 05:42, 18 September 2010 (UTC)
Before I get to the main body of the article, I have an issue with the first sentence "is one of the most popular techniques for density estimation". I can only think of three techniques: histograms, parametric methods and kernel methods. Of these, I would suggest that kernel methods are the least popular, partly perhaps because they are the least well-known. I would probably simply say "is a technique for density estimation". Yaris678 (talk) 06:30, 18 September 2010 (UTC) I have copied these posts to Wikipedia:Requests_for_feedback/2010_September_17#Multivariate_kernel_density_estimation. I suggest any further comments should go there. Yaris678 (talk) 06:52, 18 September 2010 (UTC)
September 18
Finding maximum clique in a largish graph
I have a particular graph with 67,230 vertices and 1,116,605,284 edges, and I'd like to find a maximum clique in it. I realize this is an NP-complete problem. However, maybe there is some algorithm that will run fast enough for me to get an answer for this graph. Does anyone know about the run times of various algorithms for specific sizes of graphs?
On the other hand, for my purposes a "pretty big" clique would probably be good enough. I'm thinking about taking a greedy approach and doing that several times to try to find a "pretty big" clique; since my graph contains 49.4% of the possible edges, it might look similar to the random graph G(n, 1/2), so a greedy approach might get me within a factor of 2 (based on what I read in the fourth paragraph of Clique problem#Special classes of graphs). Does anyone have any suggestions for this or another approach I should take here? Otherwise I'm probably just going to try something and see how well I can do. —Bkell (talk) 01:25, 18 September 2010 (UTC)
- I have no idea if anything here might be useful. 67.119.14.196 (talk) 22:32, 19 September 2010 (UTC)
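A sketch of the repeated-greedy idea described in the question (the adjacency structure here is a small illustrative stand-in; the real graph's edge data would be loaded separately):

```python
import random

def greedy_clique(adj, order):
    """Grow a clique greedily, scanning vertices in the given order."""
    clique, candidates = [], None
    for v in order:
        if candidates is None or v in candidates:
            clique.append(v)
            candidates = set(adj[v]) if candidates is None else candidates & adj[v]
    return clique

def best_of(adj, tries=50):
    vertices, best = list(adj), []
    for _ in range(tries):
        random.shuffle(vertices)                   # a different random order each try
        clique = greedy_clique(adj, vertices)
        if len(clique) > len(best):
            best = clique
    return best

# adj maps each vertex to the set of its neighbours
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}, 4: {0}}
print(best_of(adj))                                # -> e.g. [0, 1, 2, 3]
```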
Placing equally spaced vertices in a curve (but not based on arc length)
I'm trying to find the name of this problem so I can read the existing literature on the subject, but I'm not finding anything.
I have a real-valued, continuous function f. For a given value x, I want to find a value a so that the distance from the point (x, f(x)) to the point (a, f(a)) equals a prescribed spacing d.
Does this sound familiar to you guys? How can I read more on these kinds of problems? Could you point me to the right direction on which methods I could use to solve it? Approximations are fine.
- If x is a fixed parameter, then can't you just solve for a with your favorite numerical root finding method? 67.119.14.196 (talk) 05:47, 18 September 2010 (UTC)
- It's what I'm doing. Just wondering if there's something better out there.
- You're trying to find the intersection of a circle with an arbitrary curve. If that's all you have to go on, you're left with general methods. 67.119.14.196 (talk) 16:10, 18 September 2010 (UTC)
- The existing literature on the subject would switch variable names such that a is the constant and x is the unknown. Bo Jacoby (talk) 06:35, 18 September 2010 (UTC).
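In case it is useful, a sketch of the general root-finding approach already being used, with bracketing plus bisection (the curve f, starting point x and spacing d below are placeholders):

```python
import math

def next_vertex(f, x, d, step=1.0, tol=1e-10):
    """Find a > x with dist((x, f(x)), (a, f(a))) = d, for continuous f.

    Choose `step` small if the curve wiggles a lot, otherwise a nearer
    crossing of the circle of radius d may be skipped.
    """
    g = lambda a: math.hypot(a - x, f(a) - f(x)) - d

    hi = x + step
    while g(hi) < 0:                 # expand to the right until the circle is crossed
        hi += step
    lo = hi - step                   # g(lo) < 0 <= g(hi), so a root lies in [lo, hi]

    while hi - lo > tol:             # plain bisection
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

f = math.sin                         # placeholder curve
print(next_vertex(f, 0.0, 0.5))      # the a > 0 with (a, sin a) at distance 0.5 from (0, 0)
```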
Casimir Operator
I am wondering how to prove that the casimir operator acts as a scalar matrix if the ground field is not algebraically closed. I tried a math forum. it should be either obvious or wrong. In weibel's homological algebra, the result is stated as if it was obvious. --80.99.46.164 (talk) 08:05, 18 September 2010 (UTC)
Inequality question- For which values of k does the inequality hold for all real values of x?
Here is the equation: [1]. I have tried to solve it, but can only arrive at two conditions on k, and I have been told that this is wrong. Are there any other conditions needed for the inequality to hold for all real values of x? —Preceding unsigned comment added by Invisiblebug590 (talk • contribs) 08:17, 18 September 2010 (UTC)
- First note that k must be positive, as otherwise the expression will be negative for very large x. Also note that the denominator must have no roots, as otherwise the expression is undefined for certain x. Finally, the numerator must have no roots, as otherwise the expression is 0 for certain x. Those conditions should suffice.--203.97.79.114 (talk) 09:02, 18 September 2010 (UTC)
Every real/complex algebra, generated by real-symmetric matrices is semisimple?
Hello,
I have yet another question about semisimple algebras.
Every algebra (real or complex, not really clear to me if both hold) generated by real-symmetric matrices is semisimple.
I have seen this being used in many articles, without any explanation or reference. Is this true and if so, which theorem is being relied on?
- Let A be an algebra over the reals generated by real symmetric matrices. Note that if a is in A, then so is the transpose of a (use the fact that (ab)^T = b^T a^T, where the T denotes transpose). Now suppose A has a non-zero radical, so it has a non-zero nilpotent ideal I. Let x be a non-zero element of this ideal, so y = x·x^T is also in I, and is symmetric and non-zero (to show y is non-zero use the fact that if a linear map has matrix x with respect to an orthonormal basis, then its adjoint has matrix x^T). But no non-zero symmetric matrix can be nilpotent, since it is diagonalisable. I learned this argument from a paper by Eugene Gutkin in Phys. Lett. A Tinfoilcat (talk) 19:05, 18 September 2010 (UTC)
- If you see this used in an article without explanation or references then please add a {{fact}} or similar tag. I think if WP standards were being followed more then this question would not need to be asked.--RDBury (talk) 01:00, 19 September 2010 (UTC)
- Thank you both. However, I still have some questions and remarks. First of all, I meant "articles in journals" (not Wikipedia articles, sorry for the confusion). Secondly, does that argument also work for algebras generated by real-symmetric matrices over the complex numbers? Finally, I do think there might be a problem in Semisimple algebra, because there it is claimed that the radical is the unique nilpotent ideal containing all nilpotent elements of the algebra, and that finite-dimensional algebras are semisimple iff the radical is trivial. But this would imply that a matrix ring over a field is not semisimple (as there are nontrivial nilpotent elements) while that ring is even simple. Evilbu (talk)
- The article semisimple algebra is wrong, for exactly the reason you mention, probably due to some confusion with the commutative case. I fixed it. The argument above for semisimplicity actually appears in that article. 129.67.37.143 (talk) 13:30, 19 September 2010 (UTC)
American Mathematicians
Why is it that you never hear of (white, Western European-descent) American mathematicians? Wikipedia will attest that there are many (there's a list somewhere) but when speaking of solving unsolved problems the persons involved will always be Chinese (or Indian, less commonly other Asian) or Eastern Europeans (I'm generalizing a bit, but this is most often the case). If they are American, then they'll be either immigrants from those nations or first-generation descendants of immigrants from those nations. 24.92.78.167 (talk) 15:52, 18 September 2010 (UTC)
- Looking at the table in Fields medal, I see about 25% of the medalists are listed next to US flags. 67.119.14.196 (talk) 16:22, 18 September 2010 (UTC)
- Of those, the vast majority have Western-European sounding surnames. What is the issue here? Are you just annoyed because you can never remember how to pronounce Chebyshev's inequality? Yaris678 (talk) 18:13, 18 September 2010 (UTC)
- Hear of where? I think you'll find that amongst the general public the most famous mathematician in the world is that guy on Numb3rs.--RDBury (talk) 01:05, 19 September 2010 (UTC)
I hear of, and meet, American mathematicians incessantly. Just a moment ago I edited the article about William Thurston. Michael Hardy (talk) 16:44, 19 September 2010 (UTC)
There are a lot of Ph.D. students and post docs from Asia and Russia in the US (also in Europe). During lunch time, it often happens that you're sitting at a table where lively discussions are going on in Chinese. Count Iblis (talk) 15:47, 19 September 2010 (UTC)
Index of a subgroup
Let m be a natural number and p a prime. Consider the multiplicative group (Z/pZ)* of nonzero residues (mod p), and its subgroup H = {x^m : x in (Z/pZ)*} of m-th powers. What will be the index of this subgroup? I have the answer in my book as gcd(m,p-1) but I have no idea how this comes about. Thanks - Shahab (talk) 16:19, 18 September 2010 (UTC)
- Do you know about Lagrange's theorem? How many elements does (Z/pZ)* have? What does the subgroup generated by x^m look like? 67.119.14.196 (talk) 22:17, 18 September 2010 (UTC)
- I know of Lagrange's theorem. (Z/pZ)* is cyclic and has p-1 elements, and for a fixed x, the subgroup generated by x^m is of the form {x^mk : k is an integer}. Its order will be O(x)/gcd(m,p-1), which divides p-1. Now what should I do? If x generates (Z/pZ)* then by elementary number theory, the index of such a subgroup is (m,p-1). But that doesn't answer my question-Shahab (talk) 03:03, 19 September 2010 (UTC)
- Suppose g is a generator of (Z/pZ)*. Then, as you say, the subgroup generated by g^m is G = {g^mk : k an integer}.
- So G ⊆ H, since every g^mk = (g^k)^m is an m-th power. But for each x in (Z/pZ)* there is an integer ax such that x = g^ax, so x^m = (g^ax)^m = (g^m)^ax, which lies in G.
- So H ⊆ G. Put this all together and you have H = G. Gandalf61 (talk) 10:09, 19 September 2010 (UTC)
- Thanks. Regards-Shahab (talk) 15:48, 19 September 2010 (UTC)
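A brute-force check of the gcd(m, p−1) formula for small cases, as a Python sketch:

```python
from math import gcd

def index_of_mth_powers(p, m):
    group = set(range(1, p))                       # (Z/pZ)* for prime p
    subgroup = {pow(x, m, p) for x in group}       # the m-th powers
    return len(group) // len(subgroup)

for p in (5, 7, 11, 13):
    for m in (1, 2, 3, 4, 6):
        assert index_of_mth_powers(p, m) == gcd(m, p - 1)
print("index = gcd(m, p-1) holds in all cases tried")
```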
Base and mod, 2 qs
Is it correct that the value of x (mod n) is the same as the final (rightmost) digit in base n? And what field of study would the study of bases fall in? Could you have a base with a variable in it? —Preceding unsigned comment added by 24.92.78.167 (talk) 16:24, 18 September 2010 (UTC)
- The answer to the first question is false; for example, 15 is 0 mod 3, while its rightmost digit, 5, is 2 mod 3. It is true for n=2 though.-Shahab (talk) 16:47, 18 September 2010 (UTC)
- [ec] 15 is 120 in base 3, and its rightmost digit is 0, which is 15 mod 3. And likewise for any other nonnegative number, so the answer to the first question is yes. You may find more information in Numeral system. -- Meni Rosenfeld (talk) 17:45, 18 September 2010 (UTC)
I mean the rightmost digit of the number x written in base n. For example 15 in base 3 is 120 and the right most digit 0 is congruent to 15 mod 3 —Preceding unsigned comment added by 24.92.78.167 (talk) 17:44, 18 September 2010 (UTC)
- Oh, I misunderstood :).-Shahab (talk) 17:54, 18 September 2010 (UTC)
Also note that log_n x is related to the number of digits it takes to represent x in base n. Taking the example above of 15 = 120 in base 3, log_3 15 = log 15 / log 3 ≃ 2.46, so it takes three digits (120) to represent 15 in base 3. In general, it takes ⌊log_n x⌋ + 1 digits to represent x in base n. See Logarithm#Uses and occurrences. -- 124.157.249.242 (talk) 08:06, 19 September 2010 (UTC)
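A quick computational check of the first point (the helper below is just an illustrative base conversion):

```python
def to_base(x, n):
    """Digits of x in base n, most significant first (x >= 0, n >= 2)."""
    digits = []
    while True:
        x, d = divmod(x, n)
        digits.append(d)
        if x == 0:
            return digits[::-1]

for x, n in [(15, 3), (255, 16), (100, 7)]:
    digits = to_base(x, n)
    assert digits[-1] == x % n        # the rightmost base-n digit equals x mod n
    print(x, "in base", n, "=", digits)
```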
Higher Dimensional Results
There's a theorem that says that given a continuous map ƒ from the sphere to the plane, there will always be two antipodal points, say p and q, such that ƒ(p) = ƒ(q). The proof employs the fundamental group. Does anyone know of any higher dimensional results? Take a continuous map from a hypersphere to a hyperplane. Are there always antipodal points, maybe even submanifolds of antipodal points, with the same property? — Fly by Night (talk) 17:37, 18 September 2010 (UTC)
- The Borsuk–Ulam_theorem states that for a continuous map from the n-sphere to the Euclidean n-space there is a pair of antipodal points that map to the same point. Invrnc (talk) 23:14, 18 September 2010 (UTC)
- Thanks for that. Any idea about stronger results, i.e. submanifolds of antipodal points, with the same property? — Fly by Night (talk) 22:43, 19 September 2010 (UTC)
Rado's theorem doubt
In Rado's theorem (Ramsey theory) it says in the last line: ...can be written as a linear combination.... Does it mean integer linear combinations or rational linear combinations? If the former, how does the special case of Rado's single equation theorem follow from the general theorem. - Shahab (talk) 19:12, 18 September 2010 (UTC)
- Okay. Found the answer and updated the page.-Shahab (talk) 03:14, 19 September 2010 (UTC)
1-Factors on a bipartite graph
Hi all,
I'm doing a Graph Theory problems sheet and I've come up to a question on calculating a perfect matching (a.k.a. 1-factor) on an n x n bipartite graph (left-hand set of vertices 'P', right-hand side 'Q', say): it defines an 'X-set' as a set of vertices W in P such that |Γ(W)|<|W|, where Γ(W) is the neighborhood of W - i.e. a set which violates Hall's theorem, showing that there is no perfect matching.
We're studying ways to check whether a perfect matching exists, and one of the questions asks 'why would it be a bad method to check whether every subset of P is an X-set'? I assume by this it means once we find one X-set we stop looking, otherwise it seems incredibly obvious that the method is bad because you only need to find one such X-set and you're done, but that seems like a pretty elementary question on the nature of counterexamples (i.e. you only need one) - so assuming it means why is it bad to check every subset of P for an X-set, can anyone suggest what they might be looking for?
The only things I can come up with are very wishy-washy non-answers such as 'it may be more efficient to use an algorithm to obtain an X-set or a perfect matching', 'probabilistically speaking it's unlikely that any arbitrarily chosen subset is an X-set' etc (and I don't even think the latter is necessarily true) - so could anyone suggest what you think they might be looking for? Or do you think they are actually just pointing out the obvious that you only need to look until you find your first blocking set or check every set to show it isn't a blocking set, which seems a bit too easy to me? Maybe they're looking for me to point out that if you can construct one perfect matching, that's more efficient than showing that every single subset is not an X-set?
Ta in advance for any help :) Estrenostre (talk) 22:11, 18 September 2010 (UTC)
- I suspect you've got it. Even if you think the answer is a non-answer and wishy-washy, an algorithm that checks whether every set is an X-set, even if it terminates after finding one, is exceedingly inefficient (exponential in running time). Since you can find a perfect matching much more efficiently, it is better to find your perfect matching, and then use that as your evidence that there is no X-set. Invrnc (talk) 23:26, 18 September 2010 (UTC)
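For contrast with the exponential subset check, here is a sketch of a polynomial-time augmenting-path matching (Kuhn's algorithm); Hopcroft–Karp is faster still, but this shows the idea, and the tiny example graph at the end is only illustrative:

```python
def max_matching(neighbours):
    """Maximum matching in a bipartite graph via augmenting paths (Kuhn's algorithm).

    neighbours[u] lists the Q-vertices adjacent to P-vertex u.
    Returns the matching size and a dict mapping each matched Q-vertex to its P-vertex.
    """
    match_q = {}

    def try_augment(u, seen):
        for v in neighbours[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-routed elsewhere
            if v not in match_q or try_augment(match_q[v], seen):
                match_q[v] = u
                return True
        return False

    size = sum(try_augment(u, set()) for u in neighbours)
    return size, match_q

# A perfect matching exists iff size == |P|; otherwise some subset violates Hall's condition.
size, matching = max_matching({0: [0, 1], 1: [0], 2: [1, 2]})
print(size, matching)      # -> 3 and a perfect matching (Q-vertex -> P-vertex)
```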
September 19
Expression of a rational number in powers of p
Let p be a fixed prime. Let x be a non-zero rational number. Express x as x = p^n·(A/B) with gcd(A,B)=1 and p dividing neither A nor B. Let A/B ≡ a (mod p), i.e. a ≡ A·B^(-1) (mod p). Now my book says that x can be expressed as a·p^n + higher order terms. I see that in the case when x is an integer, but fail to understand how it follows for rationals. Can someone explain this please. Thanks-Shahab (talk) 15:45, 19 September 2010 (UTC)
- Higher order terms in what ? Does this mean higher powers of p ? And do the coefficients have to be integers in the range [0, p − 1], like a is ? If so, it sounds as if your book is describing the p-adic expansion of x. Gandalf61 (talk) 19:54, 19 September 2010 (UTC)
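Assuming the book does mean higher powers of p (as asked above), a small worked example with p = 3 and x = 5/6: here x = 3^(−1)·(5/2), so n = −1, A = 5, B = 2. Since 2^(−1) ≡ 2 (mod 3), we get a ≡ 5·2 ≡ 1 (mod 3), and indeed

\[ \tfrac{5}{6} \;=\; 1\cdot 3^{-1} + \tfrac{1}{2} \;=\; 1\cdot 3^{-1} + 2 + 1\cdot 3 + 1\cdot 3^2 + \cdots, \]

the 3-adic expansion, whose leading term is a·p^n.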
Is every permutation group decomposable into a group direct product of cyclic subgroups
The fundamental theorem of finite abelian groups states that every finite abelian group can be expressed as the direct sum of cyclic subgroups of prime-power order. However, this theorem does not apply to permutation groups, since they are not abelian. Is it true that every permutation group is decomposable into a group direct product of cyclic subgroups (of any order)? Hints to a proof or a counter-example would be great. Thanks! --Masatran (talk) 15:52, 19 September 2010 (UTC)
- No. Any direct product of cyclic groups is abelian. Algebraist 15:53, 19 September 2010 (UTC)
Thanks for the answer. I found the Jordan-Hölder theorem, which applies in certain cases. --Masatran (talk) 19:56, 19 September 2010 (UTC)
- It may be worthwhile to mention the related semi-direct product construction. While no permutation group on more than 2 letters can be the direct product of cyclic (or even abelian) groups (as Algebraist explains), in low-order cases, they do occur as semi-direct products of "well-behaved groups". For example, S3, the permutation group on three letters, is the semidirect product of the cyclic group of order 2 acting on the cyclic group of order 3. Similarly, S4, the permutation group on four letters, is the semi-direct product of a cyclic group of order 2 acting on the alternating group on 4 letters. This generalizes: the permutation group on n letters is the semidirect product of the cyclic group of order 2 acting on the alternating group on n letters, for all n ≥ 2.
- In fact, since Sn contains only one non-trivial proper normal subgroup for n ≥ 5 (the alternating group on n letters, An), for n ≥ 5, one cannot write Sn as a non-trivial semidirect product in a way distinct (up to isomorphism of the groups involved) to the one described in the previous paragraph. (Note that An is a simple group for all n ≥ 5.)
- There are other useful constructions in group theory other than the direct product and the semidirect product - for example, the wreath product is a particularly interesting example. To give one interesting application of the wreath product construction (in case you are interested), I quote the following remarkable theorem due to Yoshida:
“Let G be a finite group, let P be a Sylow p-subgroup of G, and let N denote NG(P). (Here N denotes the normalizer of P in G.) Then N controls p-transfer in G unless P has a homomorphic image isomorphic to the wreath product of two cyclic groups of prime order.”
- It is often useful to know when p-transfer is controlled and this is the reason Yoshida's theorem is so powerful. Hope that helps! PST 23:41, 19 September 2010 (UTC)
braids, knots, and links
I'm thinking about a simple braid with three strands. The leftmost strand crosses over the middle strand, then the rightmost strand crosses over what is now the middle strand, then repeat,.... and repeat,.... and repeat....
The length of the braid is the number of such crossings. Now take the end of the braid and glue it strand-by-strand to the beginning of the braid. One end of a strand may get glued to its other end, or may get glued to the other end of a different strand; this depends on the number of crossings. The total number of strands after gluing—I'll call them "components"—is the number of cycles of the corresponding permutation of three elements. Each component will be either knotted or unknotted, and the several components will be either linked or unlinked, both depending on the number of crossings. And the particular type of knot or link will depend on the number of crossings. So I drew some crude—but I hope accurate—illustrations, and I get this:
- 0 crossings, 3 components, unlinked, unknotted.
- 1 crossing, 2 components, unlinked, unknotted.
- 2 crossings, 1 component, unknotted.
- 3 crossings, 2 components, linked, unknotted.
- 4 crossings, 1 component, knotted.
- 5 crossings, 2 components, linked, unknotted.
- 6 crossings, 3 components, Borromean links, unknotted.
I haven't (yet) gone beyond 6. The number of components must clearly be a periodic function of the number of crossings, but I don't know about the knotted or linked status or the types of knots or links; I'm guessing these could grow in complexity. Only in the case of 4 crossings did I see a knot, and it's not a simple overhand knot, but it's one I think I've seen a picture of within Wikipedia. The cases with 3 and 5 crossings gave identical results—in particular, the links were the simplest possible. This suggests you should be able to rearrange the strands from one of these cases to the other without cutting any strands.
What is known about this problem? Michael Hardy (talk) 18:29, 19 September 2010 (UTC)
- ...and now I see we have a (short) section on closed braids. So probably someone's thought this through. Michael Hardy (talk) 18:34, 19 September 2010 (UTC)
- You may be interested in Wikipedia:WikiProject Knots. -- Wavelength (talk) 19:15, 19 September 2010 (UTC)
- Braid group is the most appropriate article, you seem to be looking at the braid group on three strings B3 which is isomorphic to the knot group of the trefoil knot – in particular, it is an infinite non-abelian group. This group has the presentation ⟨σ1, σ2 | σ1σ2σ1 = σ2σ1σ2⟩ or ⟨x, y | x^2 = y^3⟩. --Salix (talk): 19:46, 19 September 2010 (UTC)
- The knot for 4 crossings is the figure-eight knot if that is helpful. Rckrone (talk) 19:56, 19 September 2010 (UTC)
Definitely the figure-eight knot matches the picture I drew. I said 3 and 5 are the same link, but I now think I should have said those are different but 5 and 7 are the same link. Michael Hardy (talk) 22:12, 19 September 2010 (UTC)
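For the component counts (the knot and link types of course need more than the permutation), a quick Python sketch treating crossing i as a swap of strands (1,2) when i is odd and (2,3) when i is even, read left to right:

```python
def components(crossings):
    """Components of the closure of the alternating 3-strand braid with this many crossings."""
    perm = [0, 1, 2]                        # where each starting strand currently sits
    for i in range(crossings):
        a = 0 if i % 2 == 0 else 1          # swap positions (0,1), then (1,2), alternating
        perm[a], perm[a + 1] = perm[a + 1], perm[a]
    seen, cycles = set(), 0                 # components = cycles of the permutation
    for start in range(3):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

print([components(k) for k in range(9)])    # -> [3, 2, 1, 2, 1, 2, 3, 2, 1]
```

This reproduces the counts listed above (3, 2, 1, 2, 1, 2, 3 for 0 through 6 crossings) and shows the period-6 pattern continuing.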