Talk:White noise
White noise (slang) was nominated for deletion. The discussion was closed on 1 June 2022 with a consensus to merge. Its contents were merged into White noise. The original page is now a redirect to this page. For the contribution history and old versions of the redirected article, please see its history; for its talk page, see here.
This article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects: Physics, Professional sound production, and Statistics.
This article links to one or more target anchors that no longer exist.
Please help fix the broken anchors. You can remove this template after fixing the problems.
Fourier Transform of Gaussian white noise.
I've seen many assertions essentially stating that the Fourier transform of Gaussian white noise is also Gaussian white noise, but no proofs, and no specific definition of what that would mean in the complex case. I understand the explanation given for Gaussian white noise vectors x: 1) the probability density function of x is radially symmetric; 2) the Discrete Fourier Transform is an orthogonal transformation (actually unitary, and it has to be normalized) W, so the density function of X = W^H x is also radially symmetric.
However, X is now a complex random vector. What is the appropriate translation of a density function in this case? Do we expect the real and imaginary parts to be Gaussian, or the norm of the components of X? This gets even harder to decipher when looking at a continuous Gaussian white noise process X_t.
Justin Mauger (talk) 04:22, 25 May 2019 (UTC)
The amplitude of the Fourier transform of Gaussian white noise is constant; the phase is not Gaussian, it is uniformly distributed between 0 and 2π. Greglocock (talk) 07:05, 26 May 2019 (UTC)
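A minimal numerical check of this exchange (a sketch in Python with NumPy; the script and its parameters are my own illustration, not from the article): with a unitary DFT, the interior bins of transformed real Gaussian white noise have Gaussian real and imaginary parts of variance 1/2, Rayleigh-distributed magnitudes (constant only on average), and uniformly distributed phases.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2**16
x = rng.standard_normal(n)            # real Gaussian white noise

# Unitary DFT (normalized by sqrt(n)) so total power is preserved.
X = np.fft.fft(x) / np.sqrt(n)

# Away from DC and Nyquist, each bin's real and imaginary parts are
# ~ N(0, 1/2); the magnitude is Rayleigh, the phase uniform on (-pi, pi).
bins = X[1 : n // 2]
print(bins.real.std(), bins.imag.std())            # both close to 0.707
print(np.angle(bins).min(), np.angle(bins).max())  # spans (-pi, pi)
```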
Continuous-time white noise is not mathematically correct
I just wanted to note that the definition of a continuous-time white noise process as given here is not mathematically correct and that the sentence "it is a zero mean process for all time and has infinite power at zero time shift since its autocorrelation function is the Dirac delta function" isn't either. One cannot define such a process whose trajectories wouldn't have any form of regularity (including not being bounded on any interval). The correct way of writing white noise in continuous time is through the stochastic integral with white noise replaced by increments of the Wiener process (or Brownian motion). On the other hand this definition is OK for discrete time indices and then the power (second order moment at lag zero) is finite. —Preceding unsigned comment added by 82.234.174.90 (talk) 21:30, 28 February 2011 (UTC)
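For readers following this point, the standard reformulation mentioned above can be sketched in one line (illustrative notation): the ill-defined pointwise product with w(t) is replaced by an Itô integral against the Wiener process W_t, whose second moment is fixed by the Itô isometry.

```latex
\int_0^T f(t)\,w(t)\,dt \;\longrightarrow\; \int_0^T f(t)\,dW_t,
\qquad
\mathbb{E}\!\left[\left(\int_0^T f(t)\,dW_t\right)^{\!2}\right] = \int_0^T f(t)^2\,dt
\quad\text{(Itô isometry)}.
```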
- The version done with the tempered distributions looks correct.
- But the first part of the section Continuous-time white noise is really a steaming pile of half-baked concepts. I tried to put in a few warning labels but I don't have time to do the whole thing. 2A02:1210:2642:4A00:85A5:3302:64C8:5866 (talk) 16:42, 28 October 2023 (UTC)
- By the way, it would be useful to add the concept of a Gaussian white noise defined on any measure space (where the index variable of the white noise lives). See Le Gall, Brownian Motion, Martingales, and Stochastic Calculus, p.11, section 1.4, Gaussian White Noise. 2A02:1210:2642:4A00:85A5:3302:64C8:5866 (talk) 16:42, 28 October 2023 (UTC)
Untitled comments
Need some clarification on this statement:
"I.e., it is a zero mean process for all time and has infinite power at zero time shift since its autocorrelation function is the Dirac delta function."
Shouldn't it be as follows:
"I.e., it is a zero-mean process for all time and has infinite average noise power since its autocorrelation function is the Dirac delta function at zero time shift"?
Agree. White noise is not necessarily gaussian, and gaussian noise is not necessarily white. I found this article and discussion extremely useful. However, I would like to suggest that some of the experts around here edit the article on Additive White Gaussian Noise since it is noise that is both white AND gaussian. I feel the AWGN article is very much lacking on a lot of things. User:Daniel.kho
White noise does not necessarily have a normal distribution (if generated by a random number generator, it's uniform or has two equal spikes), nor is noise with a normal distribution necessarily white (normal white noise passed through a pink filter becomes normal pink noise). -phma
I find the following paragraph from the original article a little unclear:
"It is often incorrectly assumed that Gaussian noise (see normal distribution) is White Noise. The two properties are not related. However, Gaussian White Noise is often specified; such a signal has the useful statistical property that it's values are independent in time."
Does the "Gaussian noise" referred to mean Gaussian in the frequency spectrum, thus not white because by definition white noise has flat frequency spectrum?
Is not "Gaussian white noise" a random signal which (a) has a Gaussian probability density function (in time), and (b) is uncorrelated in time (thus white because it has a flat frequency spectrum)?
What is meant by "independent in time"? Is this the same as uncorrelated in time? Uncorrelated in time -> random, but does NOT mean/imply Gaussian. -drd
Fixed up the math a bit, does this still need to remain flagged for peer review to address the rest of these questions? --carlb 14:04, 13 Oct 2004 (UTC)
It would be nice if someone who is familiar with the concept answers the questions above. They are not covered explicitly in textbooks. The article alludes to whether the Gaussian distribution is in the time-domain, but is not explicit. --vlado4 21:51, 12 November 2005 (UTC)
Can someone please discuss what "colored noise" is! I googled the term and it provided only vague results... Please, Please, Please,Please, Please, Please, Please! Ved 01:33, 23 January 2006 (UTC)TRUPET
- Are you after Colors of noise? -Splashtalk 01:43, 23 January 2006 (UTC)
Gaussian noise
Indeed, for Gaussian noise a value at any one time-point will come from a Gaussian distribution. This does not say anything about what happens if you pick two values from adjacent time-points -- if the two values from adjacent time-points are always similar then you have pink noise, and if they are uncorrelated you have white noise. Both can be Gaussian noise, however. Rnt20 08:47, 10 Mar 2005 (UTC)
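A small sketch of this distinction (Python with NumPy; the moving average below is just an invented stand-in for "adjacent values are similar", not literally pink noise): both signals have Gaussian values, but only the first has uncorrelated neighbours.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
white = rng.standard_normal(n)

# Moving average of white noise: each value is still Gaussian, but
# adjacent samples are now similar (correlated), so it is no longer white.
corr = np.convolve(white, np.ones(8) / np.sqrt(8), mode="valid")

def lag1(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

print(lag1(white))   # ~ 0.00  (uncorrelated neighbours: white)
print(lag1(corr))    # ~ 0.87  (similar neighbours: Gaussian but not white)
```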
Simulating a random vector
Suppose that a random vector x has covariance matrix K. Since this matrix is Hermitian symmetric and positive semidefinite, by the spectral theorem from linear algebra, we can diagonalize or factor the matrix in the following way: K = E Λ E^H.
What is the Lambda? A definition of the Lambda is missing!
- It's defined in the next sentence. PAR 14:59, 18 October 2005 (UTC)
Is it required that the matrix is positive semidefinite? —Preceding unsigned comment added by 195.189.193.1 (talk) 06:50, 3 March 2011 (UTC)
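A minimal sketch of the recipe this section describes (Python with NumPy; the 3×3 covariance matrix is an invented example). On the positive-semidefiniteness question above: any valid covariance matrix is automatically positive semidefinite, and that is precisely what makes the square root of Λ real.

```python
import numpy as np

rng = np.random.default_rng(2)

# An invented symmetric positive semidefinite covariance matrix.
K = np.array([[2.0, 0.6, 0.2],
              [0.6, 1.0, 0.3],
              [0.2, 0.3, 0.5]])

# Spectral factorization K = E @ diag(lam) @ E.T  (Lambda = diag(lam)).
lam, E = np.linalg.eigh(K)

# Color white noise w: x = E @ sqrt(Lambda) @ w then has covariance K.
w = rng.standard_normal((3, 200_000))
x = E @ (np.sqrt(lam)[:, None] * w)

print(np.cov(x))   # ~ K
```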
Sleep Aid or Sleep Deprivation?
The article states that it's used in torture, and the torture article backs it up, but isn't it quite well-known that a lot of people need white noise to sleep, as it blocks out other noises and is very easy to block out?
It is unlikely that anyone uses white noise to sleep. While it is in a mathematical sense "natural", it is definitely not something humans have evolved to find pleasant or easy to block out, because it contains all those frequencies which agitate us to alertness (as well as all other frequencies, of course) and frankly hurt our ears. This makes sense when you think about the fact that a purely stochastic process which would generate white noise would be rare in nature, and therefore we wouldn't have evolved to be tolerant of it, but rather to be agitated by several of the frequencies it contains. Brown noise would be a much more likely candidate for that, as it contains the frequencies which we are most used to hearing (because they are the frequencies that wind, water, and rustling vegetation tend to generate). If you listen to white noise you'll see what I mean; it is very grating, whereas brown noise can be soothing. --Brentt 04:38, 13 January 2006 (UTC)
Brentt, I personally use white noise to sleep. It helps very much as it's the noise covering all frequencies, with little structure, and this blocks out any disruptive sounds. — Preceding unsigned comment added by 5.151.1.18 (talk) 22:03, 4 February 2017 (UTC)
Applications
The article says that white noise is used because it cuts through traffic noise and doesn't echo. All sound echoes; it's just that the echo mixes in so you can't distinguish it.
And white noise is used in sirens because of the range of frequencies. The human mind is better able to distinguish the direction of multi-frequency sound over the standard mono tone sirens.
It is the simple volume and uniqueness of a siren that makes it cut through traffic noise. When white noise is used in a siren it is usually a part of, or underlying, the sound of the usual mono-tone, as it is actually harder to distinguish from traffic noise. --Stripy42 19:17, 8 May 2006 (UTC)
Figures
I feel it might make sense for plots which show white noise 'zoomed in' on to use a 'staircase' view or be generated using the stem() plot function in matlab/octave. As it is, we see linear interpolation between samples; in fact the data shown is no longer white noise, at least not at the sampling frequency implied by the interpolated values. -- Oarih 03:55, 25 September 2006 (UTC)
- Agreed. — Omegatron 06:05, 25 September 2006 (UTC)
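A matplotlib sketch of the stem() suggestion (illustrative only; the matlab/octave equivalent would use plot() and stem() the same way):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
x = rng.standard_normal(64)          # 64 samples of white noise

fig, (top, bottom) = plt.subplots(2, 1, sharex=True)
top.plot(x)                          # joins samples: implies band-limited content
top.set_title("plot(): linearly interpolated")
bottom.stem(x)                       # shows the discrete samples themselves
bottom.set_title("stem(): sample-accurate")
plt.tight_layout()
plt.show()
```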
Confusing graphic?
I find this graphic a little confusing:
As far as I understand it the graph should be flat, and yet it clearly increases slightly with frequency. I think this needs explaining, or the graphic needs fixing.
removing image
I am removing the image shown here.
It adds nothing to the other image. The main problem with it is that because the data points are quite spaced but the image does not explicitly indicate the data points, the lines joining the points appear like correlated data, making the noise appear red rather than white. — Alan✉ 05:33, 1 August 2007 (UTC)
White Noise Not Analogous to White Light
I recommend deleting "White noise is considered analogous to white light which contains all frequencies." White light contains all visible frequencies - a band-limited response of the human eye - whereas white noise contains all frequencies from minus infinity to plus infinity on the frequency scale (i.e., the Fourier transform of a Dirac delta function).
--FP Eblen 17:39, 3 August 2007 (UTC)
- It's an analogy, not an identity. Were it to identify a theoretical construct with a physical phenomenon, it would be false, but it merely draws an analogy, and usefully, it seems to me. Nobody's got infinite bandwidth. Not EM or sound or noise. It follows that nobody's completely white, just as no real curve is a circle and no real coin has a 0.5 probability of coming up heads, but these theoretical constructs are as useful as the present analogy. Jim.henderson 17:55, 3 August 2007 (UTC)
- White noise is distinctly an abstraction, not a real physical entity. It, by definition, has infinite bandwidth. White light is a real physical entity strictly band-limited to the visible spectrum. It is misleading and inappropriate to consider them (even) analogous. If one were describing the concept of a point, it would not be appropriate to say it is analogous to a very small circle, since a point has exactly zero width and a very small circle does not. Same with white noise – its most distinguishing trait is its infinite bandwidth.
- This concept is useful and important when considering theoretical systems with widely varying input bandwidths, for example. If white noise is (theoretically) present at the input to the system, one can claim that no matter how wide the input bandwidth gets, the white noise will always fill it. This would not be true if the input was white light.
- What makes white noise white and the bandwidth infinite is the fact that its autocorrelation function has non-zero value only for exactly zero shift. For any other value of shift, the autocorrelation function is exactly zero, i.e., it is an impulse function (which, by definition, has exactly zero width). The spectrum of white noise is the Fourier transform of the autocorrelation function, which, being an impulse, makes the spectrum infinite.
- Please realize that these are facts which can be corroborated by any textbook describing white noise.
--FP Eblen 22:05, 3 August 2007 (UTC)
- Please realize that these are facts which can be corroborated by any textbook describing white noise.
- Umm, infinite bandwidth is hardly a unique feature of white noise or vice versa. See square and triangular waves, for two examples of the former. Yes, you're right to point out that the relationship between white noise and white light is not an identity. Analogy is not identity, and the current text correctly offers analogy and not identity. Anyway, what alternative text do you offer, in not too many additional words, to make the point (and the meaning of "white" in the context) clearer to readers who do not already understand Fourier transforms, autocorrelations, etc? Jim.henderson 01:48, 4 August 2007 (UTC)
- Worse, what we think of as "white" light (e.g., sunlight) is really blackbody radiation, which has a distinct intensity peak in the spectrum according to Planck's law. 155.212.242.34 (talk) 14:16, 7 December 2007 (UTC)
- That just means sunlight isn't truly white in the strictest sense. Like many technical terms, "white" has multiple, related meanings. Its ordinary meaning refers to an appearance as perceived by the human eye. This is abstracted to a theoretical source whose power distribution is uniform over all frequencies. Such a source is, of course, a physical impossibility since it would have infinite total power. Practical white noise sources can only approximate whiteness (i.e. uniformity) over some frequency range. In that sense a white noise source and white light are exactly analogous.--agr (talk) 14:46, 7 December 2007 (UTC)
- Saying they are roughly analogous may be okay, but not "exactly analogous." The important distinction being that the term "white light" has no implications of randomness, whereas "white noise" is white because of its qualities of randomness. Remember the key is that a white noise signal is 100% uncorrelated with itself for any degree of time shift greater than zero, i.e., if one compares a white noise signal to a copy of itself, the correlation is 100% when they exactly overlap in time, but if the copy is shifted an infinitesimal amount (anything greater than zero), the correlation is exactly zero. It is this fact that makes it white, since this is what makes the signal spectrum flat and infinite. If the correlation were non-zero for any value of shift greater than zero, the spectrum would not be flat or infinite. FP Eblen (talk) 07:27, 12 May 2008 (UTC)
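For reference, the autocorrelation/spectrum pair invoked repeatedly above, written out explicitly (using the two-sided level N_0/2, the same N_0 asked about elsewhere on this page):

```latex
R_{xx}(\tau) = \frac{N_0}{2}\,\delta(\tau)
\qquad\Longrightarrow\qquad
S_{xx}(f) = \int_{-\infty}^{\infty} R_{xx}(\tau)\,e^{-j 2\pi f \tau}\,d\tau = \frac{N_0}{2}
\quad\text{for all } f.
```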
The problem I'm having with this is that white light does not necessarily contain "all frequencies". You can make white light using a blue and yellow laser. On the other hand a unity power distribution will be bluish due to the spectral sensitivity of the eyes. I suggest using
- White noise is considered analogous to white light which may contain many frequencies
to remove the misunderstanding. --Thorseth (talk) 07:32, 26 August 2008 (UTC)
- Wouldn't it be fairer to say that "you can make what appears to a human eye as white light using a blue and yellow laser"? Just because the human eye misinterprets particular combinations of monochromatic light for spectrally-white light, it doesn't mean that they're physically the same! From a pure physics point of view, white noise is similar (spectrally) to true white light. Oli Filth(talk|contribs) 08:54, 26 August 2008 (UTC)
- My point was also that "true white light", by which I presume you mean a flat power distribution, is not white but bluish due to the fact that human eyes are more sensitive in the short wavelengths. So the analogue is only valid in the sense that white light is a mix of frequencies and so is white noise. The color white is a sensation that can arise from an infinite number of wavelength combinations. Therefore I don't think you can use the word misinterpret. Because with colors there are only interpretations, and no true colors, strictly speaking.
- My concern was that the notion of "white"="all colors" is being spread when it needs to be corrected. The etymological analogue should still be there but it shouldn't use a false definition of white light.--Thorseth (talk) 10:57, 26 August 2008 (UTC)
This statement too was deleted because it is incorrect:
- [white light's spectrum is flat] in such a way that the eye's three color receptors (cones) are approximately equally stimulated.
Actually the spectrum of light that we consider "white" (including sunlight at noon) is usually not flat, and the three color receptors are not equally stimulated by it, even if it had a flat spectrum. "White" cannot be physically defined, because eye and brain will adjust their processing of the light signals captured by the pigments until the objects that should be white are perceived as white. The term "white noise" is only a very loose metaphor on a commonly misunderstood concept; let's leave it at that. --Jorge Stolfi (talk) 06:01, 12 February 2013 (UTC)
removing 'Feral' white noise link
I'm removing the link to www.luketan.com simply because the so-called 'feral' white noise file is not true white noise. You can see the difference between the two here on this image I created. Binksternet 01:15, 25 September 2007 (UTC)
- I don't have any vested interest in this particular link, but "white noise" is a colloquial as well as scientific term, meaning noise with a very wide bandwidth that is useful in masking other noise, so I would think this page could accommodate the scientific as well as nonscientific content. I put a link in today for those looking for a source of white noise for masking office noise (my need today). In fact this page is part of sound production technology, and in that context, I would guess the feral stream would be labeled as "white noise" by most people. If you object, since it's missing mostly high frequencies, maybe just call it pink or nearly white noise. Hess8 16:54, 1 October 2007 (UTC)
- I feel the "feral white noise" link would be appropriate at White noise machine and Sound masking but that this page should focus on purely random white noise that has equal power at each frequency. It's enough that White noise machine and Sound masking are mentioned as External links. Binksternet (talk) 17:19, 7 December 2007 (UTC)
WikiProject class rating
This article was automatically assessed because at least one WikiProject had rated the article as start, and the rating on other projects was brought up to start class. BetacommandBot 10:05, 10 November 2007 (UTC)
what is N0?!!!
[edit]In "White random process (white noise)", the formula references "N0", but gives no definition whatsoever as to what this quantity is supposed to be...65.183.135.231 (talk) 21:24, 4 May 2008 (UTC)
White noise as vehicle back-up alarm
I took out an external link from a commercial concern which proposes to replace mid-frequency beeping warning tones with white noise. I don't think this external link is useful by itself, though if the concept were developed further as article text it could serve as a reference. Binksternet (talk) 03:10, 18 December 2008 (UTC)
- Good job. Daniel Christensen (talk) 19:35, 2 November 2009 (UTC)
power spectrum image flat on log scale == Pink ?
Despite the name of that image, a flat power spectrum on a logarithmic scale is pink noise, not white. I have used that image in a couple of articles (colors of noise, pink noise) to illustrate pink noise, but it cannot be used for white noise. If anyone has a correct spectrum, that would be appreciated. Baccyak4H (Yak!) 18:35, 25 February 2009 (UTC)
- Update: User:Heron fixed this image. Thanks. Baccyak4H (Yak!) 19:31, 25 February 2009 (UTC)
- It's not the scaling of the x axis that makes the difference, it is the method by which the energy is binned. Pink noise in an FFT will still fall with frequency however the x axis is scaled. I think I got it wrong again in the article. Either way you are confusing things, please stop. Greglocock (talk) 01:24, 26 February 2009 (UTC)
- When pink noise makes for a flat graphic image, such as in the live sound application called Smaart or SmaartLive, it is because the data are collected in such a way as to make that true. A white noise graph is naturally flat. Binksternet (talk) 03:07, 26 February 2009 (UTC)
(OD) I think I understand the confusion, including my own. Let's ask: what do we wish to demonstrate with such a plot? If it is to be able to quantify the power, e.g., to be able to show which has more power, between freqs A and B or between freqs X and Y, then that would simply be an integral, the area under the spectrum. That would depend on the horizontal scaling. In my field this is analogous to how probability density functions change when the scale of the random variable is transformed. I'll leave it up to others to figure out the best way to handle these plots, if they should reflect this integrable power, or if the log scale is just a visual trick to easily see the whole aural range, as the technical wording is probably beyond me. I would suggest that the convention, whatever it may come to be, be consistent across all noise color articles. Baccyak4H (Yak!) 03:41, 26 February 2009 (UTC)
- It is not a matter of wiki-convention, it is a matter of definition. Stretching or compressing the x axis by taking its log does NOT change the vertical scale. If you analyse white noise with a third-octave filter (for example) you will get a spectrum that rises at +3 dB per octave. If you analyse it with an FFT then it will be pretty flat. If you analyse pink noise with 1/3 octaves you'll get a flat spectrum, whereas the FFT spectrum will fall at higher frequencies. Greglocock (talk) 04:34, 26 February 2009 (UTC)
- It seems we're talking past each other. We can pick either definition you state, or perhaps even others (if they exist), but in so choosing have now implicitly decided on a convention to use that one. Hair-splitting aside, the important thing as I see it is to be consistent. As of now, it appears we have been. Baccyak4H (Yak!) 16:22, 26 February 2009 (UTC)
- No, it is not a convention. It is a mathematical definition, there is only one right answer. Greglocock (talk) 02:00, 27 February 2009 (UTC)
The power spectral density and the frequency spectrum from an FFT are different things (in different units). I do not know which definition is correct, but the whole phrase must be stated: "white spectrum" or "white spectral density". —Preceding unsigned comment added by 86.49.12.197 (talk) 13:10, 11 October 2009 (UTC)
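A numerical sketch of the point Greglocock is making (Python with NumPy; octave-wide bands are used instead of third-octaves for brevity, the effect is the same): the narrow-band FFT spectrum of white noise is flat, while the power summed into proportional-width bands climbs about 3 dB per octave.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 48_000
x = rng.standard_normal(fs * 30)               # 30 s of white noise

# Averaged periodogram: the narrow-band (FFT) spectrum of white noise is flat.
seg = x.reshape(64, -1)
psd = (np.abs(np.fft.rfft(seg, axis=1)) ** 2).mean(axis=0)
freqs = np.fft.rfftfreq(seg.shape[1], 1 / fs)

# Octave-band powers: each band is twice as wide as the one below it,
# so the band level of white noise climbs ~3 dB per octave.
edges = 125.0 * 2.0 ** np.arange(8)            # 125 Hz ... 16 kHz
for lo, hi in zip(edges[:-1], edges[1:]):
    band = psd[(freqs >= lo) & (freqs < hi)].sum()
    print(f"{lo:6.0f}-{hi:6.0f} Hz : {10 * np.log10(band):6.1f} dB")
```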
TV snow
Is this the same as the loud sound of a television channel when there is no signal? Daniel Christensen (talk) 19:34, 2 November 2009 (UTC)
I was watching the equaliser on my radio when a station dropped out, and although it is a wideband noise, there is quite a lot of shape to it (pronounced roll-off at each end). This may be the internal circuitry of the radio. So, good question, don't really know. Greglocock (talk) 23:33, 22 November 2009 (UTC)
It is hard to answer this question. It probably is, but only under circumstances where you have no other signals which can interfere. This situation is very unlikely on Earth (I don't want to use the word impossible). Someone will correct me if I'm wrong. I believe what you see (and hear) between television channels MAY BE white noise, but signal interference will/should not make it perfectly random, and if you change frequency you will end up with some artificial signal of, for instance, another television channel, which may interfere with the "snow" you see. 84.242.71.76 (talk) 05:47, 14 May 2015 (UTC)
White Noise (film)
[edit]After seeing the extras on dvd, became involved with my studio, as I am also an artist, and images showed up on my computer that are interpretable. Someone might have broken into my data base, and took images, and used them for entertainment, and there were lots of "accidents" and persons won't come forward and say anything. I know how to use images in interpreting modern media, and also use Native American tracing and tracking. Powder *)75.201.143.93 (talk) 22:20, 22 November 2009 (UTC)
Possible inaccuracy
Right now the article states that "By having power at all frequencies, the total power of such a signal is infinite and therefore impossible to generate." While white noise does have infinite power, not all signals that have non-zero power at all frequencies have infinite power. 130.234.5.136 (talk) 20:35, 23 October 2010 (UTC)
- I agree with you that there is a problem here. I have taken a stab at improving the paragraph. See if you like it. Binksternet (talk) 20:50, 23 October 2010 (UTC)
Section 4.2
The N_0 is undefined, so I don't understand the formula. Jackzhp (talk) 14:26, 18 January 2011 (UTC)
statistical white noise
Statistical white noise does NOT need to be iid. "Weak" white noise is just y_t ~ WN(0, σ²); "strong" white noise is y_t ~ iid(0, σ²). And, as correctly mentioned, Gaussian white noise is y_t ~ iid N(0, σ²). ("Elements of Forecasting," Francis X. Diebold, 4th edition, 2007)
- Looking at a couple of books on the net, it seems that the definition and nomenclature vary depending on the author, or perhaps application. This, by the way, is the rule even in the most abstract mathematical fields. Do not take any book, no matter how popular, as being the truth in nomenclature and notation. Wikipedia should try to record all common conventions, without implying that one of them is more "correct" or "official" than any other. Please check the White noise#White noise vector section, it now tries to do that. All the best, --Jorge Stolfi (talk) 21:44, 12 February 2013 (UTC)
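For reference, the three variants quoted above (per Diebold), set side by side:

```latex
\begin{aligned}
\text{weak white noise:} \quad & y_t \sim \mathrm{WN}(0,\sigma^2):\;
  \mathbb{E}[y_t]=0,\ \operatorname{Var}(y_t)=\sigma^2,\ \operatorname{Cov}(y_t,y_s)=0 \text{ for } t\neq s,\\
\text{strong white noise:} \quad & y_t \overset{\text{iid}}{\sim} (0,\sigma^2),\\
\text{Gaussian white noise:} \quad & y_t \overset{\text{iid}}{\sim} \mathcal{N}(0,\sigma^2).
\end{aligned}
```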
Clarification: Other white distributions
Under Statistical properties is written, "We can therefore find Gaussian white noise, but also Poisson, Cauchy, etc. white noises." I suggest this be changed to, "A Gaussian amplitude distribution could give a white frequency distribution, but so could amplitude distributions that are Poisson, Cauchy, etc.," but am not sure if that covers the meaning of the original sentence. ᛭ LokiClock (talk) 05:47, 23 August 2011 (UTC)
Generalization to two/n dimensions?
How is white noise generalized to multiple dimensions? — Preceding unsigned comment added by 93.173.152.222 (talk) 10:39, 20 February 2012 (UTC)
Ahem...
Who is the genius that compressed the white noise image using a lossy algorithm (JPEG)? (hint: it's not real white noise after the compression). And the genius who put said image in the article? — Preceding unsigned comment added by 85.74.53.83 (talk) 17:45, 23 August 2012 (UTC)
Which image are you referring to? The spectrogram or the time history? Greglocock (talk) 23:48, 24 August 2012 (UTC)
- was contributed by Lenilucho (talk · contribs). --Kvng (talk) 14:59, 26 August 2012 (UTC)
- ...And since you can't reproduce the original signal from a spectrogram, as it is missing the phase information, it makes little odds whether the image is lossless or not. The original complaint was absurd. Greglocock (talk) 22:24, 26 August 2012 (UTC)
- I'm not endorsing the IP's snark or reasoning but a PNG is generally preferred for screenshots like this. If nothing else, it makes text more readable (though there doesn't seem to be a problem with that here). --Kvng (talk) 13:43, 27 August 2012 (UTC)
Problem with new "Generation" section
[edit]" Generating White noise requires outputting a random number through a Digital-to-analog converter. The quality of the white noise will depend on the quality of the Random number generator. There must be a sufficient number of non repeating samples output or else the result will be an audible "
That is a lousy way of generating useful white noise as it will have a uniform amplitude distribution. If the reference in question seriously proposes this method then it is a damn fool reference and should not be trusted. Greglocock (talk) 06:28, 15 September 2012 (UTC)
- I added a ref which contains details about adjusting amplitude distribution. -—Kvng 13:28, 19 September 2012 (UTC)
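A sketch of the amplitude-distribution point (Python with NumPy; illustrative): a raw uniform RNG and a Gaussian generator both give white sequences (uncorrelated samples, flat spectrum), and differ only in amplitude distribution, visible here in the kurtosis.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1 << 16

# Both sequences are white (uncorrelated samples), but the amplitude
# distributions differ, which is the objection raised above.
uniform_white  = rng.uniform(-1.0, 1.0, n)   # what a raw RNG->DAC path gives
gaussian_white = rng.standard_normal(n)      # Gaussian amplitude distribution

for name, x in [("uniform", uniform_white), ("gaussian", gaussian_white)]:
    x = x - x.mean()
    r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)     # lag-1 correlation ~ 0
    kurt = np.mean(x**4) / np.mean(x**2) ** 2     # 1.8 uniform vs 3.0 Gaussian
    print(f"{name:8s} lag-1 corr {r1:+.3f}  kurtosis {kurt:.2f}")
```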
Gaussian is special, no?
I added a note that a vector of n independent variables w_i has a spherically symmetric distribution only if each w_i has a normal distribution. That claim was deleted. I agree that it is distracting here, and belongs to some other article; but was there any other reason for the deletion? (AFAIK it is true, although the proof is slightly non-trivial. No?) --Jorge Stolfi (talk) 22:20, 12 February 2013 (UTC)
- As I linked to in my edit summary (maybe you didn't notice it because I pipelinked it) the entire class of elliptical distributions, of which the normal is a special case, have the spherical property when the individual variables are independent and of equal variances. The ellipticals in general are linear transformations of sphericals. Duoduoduo (talk) 22:49, 12 February 2013 (UTC)
- I may be missing something, but I believe that if a random vector x has a distribution of the form f(x) = g(x^T Σ^{-1} x), as you describe, then its components will be independent if and only if g is a negative exponential and Σ is diagonal; that is, f is an axis-aligned multivariate Gaussian.
For example, let n = 2, g be a rectangular window (1 on [0, 1], 0 elsewhere), and Σ be the identity. Then f is constant in the unit disk, 0 outside. The PDF of each component x_i will be nonzero in the open interval (−1, +1). If x_1 and x_2 were independent, then f would have to be nonzero over the square (−1, +1) × (−1, +1).
Where did I go wrong? --Jorge Stolfi (talk) 02:39, 13 February 2013 (UTC)
- It's possible that we're using different concepts of spherical symmetry. You seem to have in mind a concept that implies independence, but it appears that your example of a spherical special case of an elliptical distribution does not have independence. But I know that I've read about how any joint elliptical distribution can be subjected to a linear transformation so as to give it spherical symmetry (and vice versa). The abstract of the article Chamberlain, G. (1983). "A characterization of the distributions that imply mean-variance utility functions", Journal of Economic Theory 29, 185-201. doi:10.1016/0022-0531(83)90129-1 begins If there is a riskless asset, then the distribution of every portfolio is determined by its mean and variance if and only if the random returns are a linear transformation of a spherically distributed random vector. I don't have access to the article now, but it goes on to discuss the elliptical distributions in light of that sentence. See also Owen, J., and Rabinovitch, R. (1983). "On the class of elliptical distributions and their applications to the theory of portfolio choice", Journal of Finance 38, 745-752. JSTOR 2328079.
- In this context spherical symmetry, in my understanding, simply means that the iso-density loci in n-space are circles if n=2, spheres if n=3, and spheroids if n>3. Duoduoduo (talk) 18:15, 13 February 2013 (UTC)
- Thanks. An elliptic distribution with a suitable linear mapping is spherically symmetric, no question about that. The question is indeed about the relation between spherical symmetry and statistical independence. AFAIK, the multivariate Gaussian with covariance matrix σ²I is the only spherically symmetric multivariate distribution whose components are statistically independent. If one substitutes "uncorrelated" for "independent", then any elliptic distribution with Σ = σ²I will do. (But then many other non-elliptic distributions will do too.)
All the best, --Jorge Stolfi (talk) 22:26, 13 February 2013 (UTC)
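A numerical check of the disk example discussed above (Python with NumPy; illustrative): the uniform distribution on the unit disk is spherically symmetric and its components are uncorrelated, yet they are clearly dependent.

```python
import numpy as np

rng = np.random.default_rng(5)

# Uniform distribution on the unit disk: spherically symmetric, and the
# components are uncorrelated, yet clearly not independent.
r = np.sqrt(rng.uniform(size=500_000))        # radius CDF r^2 for uniform disk
th = rng.uniform(0.0, 2 * np.pi, size=500_000)
x, y = r * np.cos(th), r * np.sin(th)

print(np.corrcoef(x, y)[0, 1])                # ~ 0: uncorrelated
# Dependence: |y| is constrained by x near the rim of the disk.
print(np.abs(y[np.abs(x) > 0.9]).max())       # <= sqrt(1 - 0.81) ~ 0.44
print(np.abs(y[np.abs(x) < 0.1]).max())       # ~ 1
```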
Proposed rewrite of continuous-time white noise section
Hi, the section White noise#White random process (white noise), like many articles in Wikipedia, seems to be meant for experts who know the notation (a certain notation?) and the jargon used in the field (a certain field?).
Each wikipedia article and section should be written in such a way to satisfy all "needy customers" -- readers that come to it because they need the information it contains. In this case, I would think that they would be mostly people (like me) with technical background, who know basic calculus, probability, and statistics, and need to know the basics of "white noise" signals; but are not necessarily familiar with the theory and jargon of stochastic processes. Those people also would need a bit of justification ("why is the concept defined that way?")
Please have a look at this draft proposal for that section. Would it be an improvement? (I am not familiar with the field; I hope it is not complete nonsense.)
All the best, --Jorge Stolfi (talk) 03:08, 13 February 2013 (UTC)
- I took the liberty of correcting a parsing error in your draft (the LaTeX didn't like the minus sign that you used). Sorry I don't know anything about that topic so I can't comment on your draft. Duoduoduo (talk) 14:47, 13 February 2013 (UTC)
- Yours is clearly an improvement. It is more accessible and you have included a reference. -—Kvng 16:19, 16 February 2013 (UTC)
- The definition of continuous-time white noise has been replaced by the draft previously posted. Please check it. Is it correct? Is it compatible with commonly accepted definitions? --Jorge Stolfi (talk) 16:45, 27 February 2013 (UTC)
Which should come first, continuous or discrete?
There are at least three approaches to defining white noise: finite vectors, infinite process with discrete time (DT), and infinite process with continuous time (CT).
In practice all signals are finite, and are often processed/analyzed with rather small windows (say n=1024 or 4096 samples); so the first formulation is the one actually used in practice. However the parameter n is a distraction that is not relevant to the key concept of "white noise".
The infinite DT approach is meant to remove that distraction, at the cost of one unrealistic assumption: that the signal has been going on since minus forever. This assumption creates some deep conceptual problems (for instance it makes "independent" distinct from "unpredictable", and one must be careful when discussing "mean power" etc.); but the model can still be used in practice without running into those potholes. However it still has one distracting parameter, the sampling interval δ; and the discreteness of time causes problems when one wants to model things like resampling.
The CT model is meant to resolve these problems, again at the cost of another unrealistic assumption: namely, that time is a real number, and therefore the signal can be arbitrarily complicated in any interval of time, no matter how short.
The CT approach is standard in physics, where that cost is understood and widely accepted. For white noise, however, this cost seems prohibitive. While the functions of physics are necessarily continuous, and differentiable whenever the derivative has physical sense, a CT white-noise process must necessarily be totally discontinuous. Worse, its value f(t) at any given time t cannot be a real number. (The distribution of f(t) must have infinite variance, but must be Gaussian, hence its value must be infinite with probability one.) Therefore the mere definition of "CT white noise" already requires machinery from the theory of distributions. Now, anyone who has been through college can understand physical functions, their integrals and derivatives; but few mathematicians, and even fewer scientists, know about distributions and can work with them.
The point of this long preamble is that a Wikipedia reader who is looking for a mathematical definition of "white noise" will probably be ill served if he is given the CT definition first. Methinks that a discrete version should be given first: either the finite one ("white noise vector") or the infinite DT process one; where each sample is still a real number with an ordinary Gaussian distribution.
All the best, --Jorge Stolfi (talk) 19:39, 20 February 2013 (UTC)
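For what it's worth, the distribution-theoretic definition alluded to here (and in the "tempered distributions" remark earlier on this page) can be stated compactly: white noise W is a random linear functional on test functions, never evaluated at a single instant:

```latex
\langle W, \varphi\rangle \sim \mathcal{N}\!\left(0,\ \sigma^2\!\int \varphi(t)^2\,dt\right),
\qquad
\mathbb{E}\big[\langle W,\varphi\rangle\,\langle W,\psi\rangle\big]
  = \sigma^2\!\int \varphi(t)\,\psi(t)\,dt,
```

which encodes R(τ) = σ²δ(τ) without ever assigning W a value at any single t.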
- Without offering any distinct answer, the best alternative would be the one that is readable to the average encyclopedia reader, rather than an abstract mathematician. If somebody knows enough to care they probably don't need a wiki definition. Greglocock (talk) 04:40, 21 February 2013 (UTC)
- I agree that of course the article should at least start out comprehensible to the average reader who is willing to click on this article, though it's not true that "if somebody knows enough to care they probably don't need a wiki definition" (me for instance as a counterexample -- I'm familiar with some but not other aspects of it).
- I think that in general the finite-dimension discrete-time case is going to be easier for more people to understand, so I think that the white noise vector should come first. Duoduoduo (talk) 13:09, 21 February 2013 (UTC)
Paragraph needs clarification
The following paragraph was removed from the "Uses" section because the jargon seems unintelligible to readers who are not experts, and I do not know how to fix it. Would someone please clarify it? Anyway it does not seem to be a "use" of white noise.
- In regression analysis, such as ordinary least squares, the validity of hypothesis testing typically is contingent on the underlying shock variables being white noise. If this assumption is violated because there is autocorrelation of the shocks underlying the estimated residuals, this does not bias the OLS coefficient estimates, but their standard errors are estimated with bias (and so the t-scores are biased). If the white noise assumption is violated because the errors are not identically distributed, and specifically because they are heteroskedastic, then again the standard errors, and hence hypothesis tests, are biased.
--Jorge Stolfi (talk) 05:32, 27 February 2013 (UTC)
- Right, "Uses" wasn't a good descriptor; section "Mathematical applications" and subsection "Time series analysis and regression" which appear there now are fine. As for the jargon being unintelligible to those who are not experts, that's how I view much of the rest of the article! The topic is inherently difficult, and after the lede the reader is inevitably going to find it tough going.
- As for the passage you removed, I'm the author of it, so naturally I like the current wording. As far as I can see, the wording of the passage is fine and every piece of jargon is linked to an article explaining it. The only way I can see to possibly improve it would be to parenthetically define most of the linked pieces of jargon, as follows:
- In regression analysis, such as ordinary least squares, the validity of hypothesis testing typically is contingent on the underlying errors (differences between the observed dependent variable value and the value implied by the deterministic true underlying model) being white noise. If this assumption is violated because there is autocorrelation of the noise in the underlying model (that is, deviations of the noise from zero in one time period have a tendency to coincide with deviations in another time period), this does not bias (render systematically wrong) the coefficient estimates, but their standard errors (degree of uncertainty of coefficient estimates) are estimated with bias and so the t-scores showing the likelihood of departure of coefficients from zero are biased. If the white noise assumption is violated because the errors are not identically distributed, and specifically because they are heteroskedastic (the noise in some time periods has larger variance than that in other periods), then again the standard errors, and hence hypothesis tests, are biased.
- To me this sounds rather wordy, but it does help the reader avoid clicking a lot of other links. Duoduoduo (talk) 16:16, 27 February 2013 (UTC)
- Thanks for the help. However, I cannot find a definition for "shock variable" in the regression analysis or hypothesis test articles. I presume that those are the errors (difference between the value predicted by the regression model and the actual data); is that right?
A broader question is: I understand the goal of regression analysis as being merely to find a description of the data. In that context there should be no distinction between "noise" and "signal", but only between the model's prediction and the error. If the regression is properly done, aren't the errors automatically uncorrelated? In other words, how can one distinguish between correlations in the signal and correlations in the noise? All the best, --Jorge Stolfi (talk) 17:19, 27 February 2013 (UTC)
- Sorry, in the above I've now changed "shock variables" to "errors" with some parenthetical explanations.
- The goal of regression is not just a description of the data; it's also to test hypotheses about underlying causal relationships.
- Regression analysis has no concept of "signal", though I suppose one could find some aspect of regression that might be analogous to signal.
- One must distinguish between "errors" -- the discrepancy between the dependent variable value and the value implied by the deterministic part of the true unobserved underlying model -- and "residuals" -- the discrepancy between the dependent variable value and the value implied by the estimated version of the model. See errors and residuals in statistics. Correlation of the errors, or the absence thereof, is not a feature of how the regression is done, but rather a feature of the noise part of the underlying process that is assumed to have generated the data. However, sometimes a better specification of what explanatory variables to include in the model can lead to a model in which a hypothesis that the errors are uncorrelated cannot be rejected, and (all other things equal) that is a better specification. Duoduoduo (talk) 19:05, 27 February 2013 (UTC)
- Ah, thanks, now I understand, I think. Would this reparaphrase be in the right ballpark:
- In statistics one often assumes that an observed series of "data" values is the sum of a "model" series, generated by a deterministic linear process, and a series of random "noise" values. One common task in that area is regression analysis, whose goal is to infer the parameters of the model process from the observed data, e.g. by least squares. Another common task is to test the hypothesis that the data was generated by a given model. In both situations one typically assumes that the noise values are uncorrelated variables with zero mean and the same Gaussian probability distribution; in other words, that the noise is white. If this assumption is not true (that is, if there is some non-trivial correlation between the noise values at different times) then the estimated model parameters may still be unbiased, but estimates of their uncertainties (such as confidence intervals) will be overly optimistic. This is also true if the noise is heteroskedastic, that is, it has different variances at different times.
- --Jorge Stolfi (talk) 23:52, 27 February 2013 (UTC)
- My version:
- In statistics and econometrics one often assumes that an observed series of data values is the sum of a series of values generated by a deterministic linear process and a series of random noise values. Then regression analysis is used to infer the parameters of the model process from the observed data, e.g. by ordinary least squares, and to test the hypothesis that each of the parameters is non-zero. Hypothesis testing typically assumes that the noise values are mutually uncorrelated with zero mean and the same Gaussian probability distribution — in other words, that the noise is white. If there is non-zero correlation between the noise values at different times then the estimated model parameters are still unbiased, but estimates of their uncertainties (such as confidence intervals) will be biased (not accurate on average). This is also true if the noise is heteroskedastic — that is, if it has different variances for different data points.
- Great! That text is now in the article. Thanks! --Jorge Stolfi (talk) 16:25, 28 February 2013 (UTC)
- You're welcome, and thanks for all the work you're doing on the article. I've enjoyed working with you! Duoduoduo (talk) 17:25, 28 February 2013 (UTC)
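A simulation sketch of the agreed paragraph (Python with NumPy; the model, ρ = 0.8, and sample sizes are invented for illustration): with AR(1) errors, the OLS slope stays unbiased but the classical standard-error formula understates the true sampling spread.

```python
import numpy as np

rng = np.random.default_rng(6)
n, reps, rho = 200, 2000, 0.8
t = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), t])

slopes, reported_se = [], []
for _ in range(reps):
    # AR(1) errors: autocorrelated, hence NOT white noise.
    e = np.zeros(n)
    for i in range(1, n):
        e[i] = rho * e[i - 1] + rng.standard_normal()
    y = 1.0 + 2.0 * t + e
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    cov = s2 * np.linalg.inv(X.T @ X)     # classical (white-noise) formula
    slopes.append(beta[1])
    reported_se.append(np.sqrt(cov[1, 1]))

print(np.mean(slopes))                        # ~ 2.0: still unbiased
print(np.std(slopes), np.mean(reported_se))   # true spread >> reported SE
```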
Continuous-time white noise simulation and whitening
[edit]The section "Random signal transformations" was removed because (a) it seems to be original research and (b) it seems to be incorrect and/or useless. It is temporarily in User:Jorge Stolfi/Temp/White noise simulation and whitening until these issues can be resolved.
The "whitening" is not possible, even in theory, if the input signal has zero power density at some frequency (or frequency band). Even if that is not the case, the whitening filter may not be realizable because it may demand unbounded amplification (consider an input signal with Gaussian-shaped power spectrum) or infinite complexity (since the input spectrum may be infinitely complicated).
As for the "simulation" part, the name is unwarranted since the 1st and 2nd moments do not constrain the signal very much; hence the shape of the synthetic signal may be totally different from the model signal. The questions about physical realizability also remain.
--Jorge Stolfi (talk) 14:57, 27 February 2013 (UTC)
- I think you are safe to assign that lot to the bit bucket. It is not appropriate for wiki, even if it is true. Greglocock (talk) 22:47, 27 February 2013 (UTC)
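For the record, the whitening construction under discussion, in one line (illustrative notation): the filter's squared magnitude is the reciprocal of the input PSD, which is exactly why zero or vanishing PSD regions make it unrealizable, per the objections above.

```latex
|H(f)|^2 = \frac{1}{S_x(f)}
\qquad\Longrightarrow\qquad
S_y(f) = |H(f)|^2\,S_x(f) = 1 \quad\text{wherever } S_x(f) > 0.
```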
"Static"?
Why is white noise also called "static"? Is that derived from "static electricity", and if so, where's the connection? I couldn't find an answer on this in Wikipedia. -- 89.182.88.149 (talk) 08:34, 2 June 2015 (UTC)
Both graphs in time and frequency domain.
It would be good to show the noise pattern both in the time and frequency domains. Quaderratistteuer (talk) 19:16, 22 September 2020 (UTC)
"White noise (slang)" listed at Redirects for discussion
An editor has identified a potential problem with the redirect White noise (slang) and has thus listed it for discussion. This discussion will occur at Wikipedia:Redirects for discussion/Log/2022 August 30#White noise (slang) until a consensus is reached, and readers of this page are welcome to contribute to the discussion. Dronebogus (talk) 11:54, 30 August 2022 (UTC)
"Gaussian white noise" listed at Redirects for discussion
The redirect Gaussian white noise has been listed at redirects for discussion to determine whether its use and function meets the redirect guidelines. Readers of this page are welcome to comment on this redirect at Wikipedia:Redirects for discussion/Log/2023 July 5 § Gaussian white noise until a consensus is reached. 1234qwer1234qwer4 20:51, 5 July 2023 (UTC)