Thursday, April 12, 2007

Thomas Gold Was Right and Wrong

Often the most original insights in a field of study come from outside. My book research shows that many great minds were polymaths, interested in many subjects. Continental Drift was promoted by Alfred Wegener, who trained as an astronomer and became a leader in meteorology. Though it seems obvious from the fit of coastlines, during Wegener's lifetime few accepted that the continents could move. Today's theory of plate tectonics is a vindication of Wegener.

Astrophysicist Thomas Gold died in 2004. During his long career he made many contributions to physics and astronomy. Unlike many scientists, Gold was willing to risk being wrong, and many of his ideas turned out to be right. When pulsars were discovered by Jocelyn Bell in 1967, some thought they could be signals from ETs. Gold and Fred Hoyle identified them as rapidly spinning neutron stars. The source of neutron stars' rotating beams has remained a mystery, but they could be explained by internal Black Holes.

Gold and Hoyle also promoted a "Steady State" universe. In this model the universe expands forever while new matter is continuously created, keeping its average density constant. The steady state was an alternative to what Hoyle derisively named a "Big Bang." Who knew that this insulting name would catch on? Discovery of the cosmic microwave background in 1965 discredited the steady state: the 2.7K radiation showed that the Universe evolved from a hotter, denser state.

Note that a modern theory of R = ct predicts a Big Bang. Redshifts would indeed be caused by expansion. Change in c would not cause redshifts, but would make high redshifts curve upward. That might lead a naive person to think that the universe was accelerating due to "dark energy."
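As an illustrative aside (my own sketch, not part of the original post): the textbook luminosity distances for a coasting R = ct universe and a flat matter-dominated universe can be compared numerically. The Hubble constant value and the Python below are assumptions for illustration, and the coasting formula d_L = (c/H0)(1+z)ln(1+z) is the standard result for a generic R = ct expansion, not necessarily the exact relation in the varying-c model discussed here.

```python
import math

H0 = 70.0           # Hubble constant, km/s/Mpc (assumed illustrative value)
c_km = 299_792.458  # speed of light, km/s
hubble_dist = c_km / H0  # Hubble distance in Mpc

def dl_matter(z):
    """Luminosity distance in a flat, matter-dominated (Einstein-de Sitter) universe."""
    return 2 * hubble_dist * (1 + z) * (1 - 1 / math.sqrt(1 + z))

def dl_coasting(z):
    """Luminosity distance in a coasting R = ct universe."""
    return hubble_dist * (1 + z) * math.log(1 + z)

for z in (0.1, 0.5, 1.0):
    dm, dc = dl_matter(z), dl_coasting(z)
    # A larger d_L means dimmer supernovae -- the effect read as "acceleration"
    delta_mu = 5 * math.log10(dc / dm)
    print(f"z={z}: EdS {dm:.0f} Mpc, coasting {dc:.0f} Mpc, delta_mu={delta_mu:+.3f} mag")
```

Since the coasting distances come out larger at every redshift, supernovae look slightly dimmer than the matter-dominated prediction, which is the upward curve described above.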

We were taught in school that petroleum is produced from decomposing fossils, hence the term "fossil fuel." Gold, following some Soviet scientists of the 1950's, took the maverick view that petroleum is abiogenic. Under this theory oil forms deep within the Earth, cooked by Earth's internal heat and aided by bacteria living many kilometres underground. Bacteria would explain the organic compounds found within oil. Under this theory, Earth contains far more petroleum than previously thought and is still producing it.

Was Gold right or wrong about oil? The abiogenic theory is still a minority view, but recent discoveries lend support to Gold. The deepest hole ever drilled, the Kola Superdeep Borehole, found bacteria living at depths of 6.7 kilometres. We have found "extremophiles" living in all sorts of inhospitable environments. Moons such as Titan and possibly the planet Mars are still producing methane through some mysterious process. Earth herself produces an unknown amount of methane, which escapes through volcanic and oceanic vents. Production of these hydrocarbons is indirect evidence supporting Thomas Gold.

The amount of energy produced by Earth's interior is only a guess. The old hypothesis of "radioactive decay" is not adequate to explain even Earth's known heat production. If Earth leaks a large amount of methane, that is even more energy that must be accounted for. Something else is keeping Earth's interior hot, producing volcanism and also hydrocarbons.

The first stars did not form until hundreds of millions of years after the Big Bang origin. In the early Universe of quasars and active galactic nuclei, the primary source of energy was Black Holes. Hidden from sight, they may still be producing energy. Thomas Gold thought that Earth's internal heat is still producing oil. The petrol in your tank may be energy from a tiny Black Hole.

UPDATE: This doesn't imply that we should waste hydrocarbons. Even if Earth is still producing oil, humans have been using it far faster.


20 Comments:

Blogger Rudy Wellsand said...

This comment has been removed by a blog administrator.

7:10 AM  
Blogger Kea said...

Cool post! I was wondering a bit about the Black Gold. Of course, this would be no excuse to continue polluting the air.

9:58 AM  
Blogger L. Riofrio said...

Very true, Kea. Even if Earth is still producing oil, we are taking it out much faster. We also put too much junk in the air.

10:19 AM  
Anonymous Anonymous said...

May I try to post something here? This is meant in the best possible way - I see what you are trying to do, that you are intelligent and care about physics, but I also see that you're going to have (and have had?) trouble communicating with more mainstream physicists. Maybe you don't care about that, I don't know - but here's my attempt to help.

When you say that c is changing, you have to be very careful to define what you mean. The reason is that c is a dimensionful quantity, with units of length/time. In fact, if you look here:

http://www.mel.nist.gov/div821/museum/timeline.htm

you will see that *by definition* the speed of light can not change, because the meter itself is defined in terms of it! So instead, you would have to say that the meter is changing with time - but this is quite awkward.

Of course you may object that this is just a definition, an arbitrary choice of units - but that's precisely the point. If you say c is changing, you had better specify with respect to what meter stick and what clock, or else the statement is meaningless.

I think you will find, if you proceed from there in your theory but choose the standard units in which c is constant, that your two equations reduce to the standard solution for a matter dominated universe, namely R(t) = B t^(2/3), where R(t) is the scale factor of the universe, and B is a constant. That solution doesn't actually quite match the cosmological data, but perhaps that's best left for another comment.
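One can verify numerically that R(t) = B t^(2/3) satisfies the matter-dominated Friedmann relation H^2 ∝ R^-3 (i.e., H^2 R^3 is constant). This check is my own sketch in Python; the normalization B = 1 is arbitrary.

```python
def a(t, B=1.0):
    """Scale factor of a matter-dominated universe, R(t) = B * t^(2/3)."""
    return B * t ** (2 / 3)

def H(t):
    """Hubble parameter adot/a, via a central-difference numerical derivative."""
    h = 1e-6
    return (a(t + h) - a(t - h)) / (2 * h) / a(t)

# Friedmann equation for matter domination: H^2 scales as a^-3,
# so H^2 * a^3 should be the same constant, (2/3)^2, at every epoch.
vals = [H(t) ** 2 * a(t) ** 3 for t in (0.5, 1.0, 2.0, 5.0)]
print(vals)  # each value is approximately 4/9
```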

3:17 PM  
Blogger nige said...

anonymous,

See http://www.iop.org/EJ/abstract/0034-4885/66/11/R04, a paper published in Rep. Prog. Phys. 66, 2025-2068, which states:

"We review recent work on the possibility of a varying speed of light (VSL). We start by discussing the physical meaning of a varying-c, dispelling the myth that the constancy of c is a matter of logical consistency. ..."

The fixed velocity of light was only accepted in 1961, and it is fixed by consensus not by science.

A similar consensus fix is Benjamin Franklin's guess that there is an excess of free electric charge at the anode of a battery, which he labelled positive for surplus, based purely on guesswork.

Hence, now we all have to learn that in electric circuits, electrons flow in the opposite direction (i.e., in the direction from - to +) to Franklin's conventional current (+ toward -).

This has all sorts of effects you have to be aware of. Electrons accelerated upwards in a vertical antenna consequently produce a radiated signal which starts off with a negative half cycle, not a positive one, because electrons in Franklin's scheme carry negative charge.

Similarly, the idea of a fixed constant speed of light was appealing in 1961, but it would be as unfortunate to argue that the speed of light can't change because of a historical consensus as to insist that electrons can't flow around a circuit from the - terminal to the + terminal of a battery, because Franklin's consensus said otherwise.

Sometimes you just need to accept that consensus doesn't take precedence over scientific facts. What matters is not what a group of people decided was for the best in their ignorance 46 years ago, but what is really occurring.

The speed of light in vacuum is hard to define because it's clear from Maxwell's equations that light depends on the vacuum, which may be carrying a lot of electromagnetic field or gravitational field energy per cubic metre, even when there are no atoms present.

This vacuum field energy causes curvature in general relativity, deflecting light, but it also helps light to propagate.

Start off with the nature of light given by Maxwell's equations.

In empty vacuum, the divergences of magnetic and electric field are zero as there are no real charges. Hence the two Maxwell divergence equations are irrelevant and we just deal with the two curl equations.

For a Maxwellian light wave where the E field and B field intensities vary along the propagation path (x-axis), Maxwell's curl equation for Faraday's law reduces to simply dE/dx = -dB/dt, while the curl equation for the magnetic field created by vacuum displacement current is -dB/dx = m*e*dE/dt, where m is the magnetic permeability of space, e is the electric permittivity of space, E is electric field strength, and B is magnetic field strength. To solve these simultaneously, differentiate both:

d^2 E /dx^2 = - d^2 B/(dx*dt)

-d^2 B /(dx*dt) = m*e*d^2 E/dt^2

Since d^2 B/(dx*dt) occurs in each of these equations, they combine to give d^2 E/dx^2 = m*e*d^2 E/dt^2, a wave equation whose propagation speed is dx/dt = c = 1/(m*e)^{1/2} = 300,000 km/s.
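Numerically, with the textbook SI values of the vacuum permeability and permittivity (a sketch I have added; the constants are the standard CODATA values):

```python
import math

mu0 = 4 * math.pi * 1e-7    # magnetic permeability of free space, H/m
eps0 = 8.8541878128e-12     # electric permittivity of free space, F/m

# Maxwell's wave speed: c = 1 / sqrt(mu0 * eps0)
c = 1 / math.sqrt(mu0 * eps0)
print(c)  # approximately 2.998e8 m/s, i.e. ~300,000 km/s
```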

However, there's a problem introduced by Maxwell's equation -dB/dx = m*e*dE/dt, where e*dE/dt is the displacement current.

Maxwell's idea is that an electric field which varies in time as it passes a given location, dE/dt, induces the motion of vacuum charges along the electric field lines while the vacuum charges polarize, and this motion of charge constitutes an electric current, which in turn creates a curling magnetic field, which by Faraday's law of induction completes the electromagnetic cycle of the light wave, allowing propagation.

The problem is that the vacuum doesn't contain any mobile virtual charges (i.e. virtual fermions) below a threshold electric field of about 10^18 V/m, unless the frequency is extremely high.

If the vacuum contained charge that is polarizable by any weak electric field, then virtual negative charges would be drawn to the protons and virtual positive charges to electrons until there was no net electric charge left, and atoms would no longer be bound together by Coulomb's law.

Renormalization in quantum field theory shows that there is a limited effect only present at very intense electric fields above 10^18 V/m or so, so the dielectric vacuum is only capable of pair production and polarization of the resultant vacuum charges in immensely strong electric fields.

Hence, Maxwell's "displacement current" of i = e*dE/dt amps, doesn't have the mechanism that Maxwell thought it had.

Feynman, who with Schwinger and others discovered the limited vacuum dielectric shielding in quantum electrodynamics when inventing the renormalization technique (where the bare core electron charge is stronger than the shielded charge seen beyond the IR cutoff, because of shielding by polarization of the vacuum out to 1 fm radius, or 10^18 V/m), should have solved this problem.

Instead, Feynman wrote:

‘Maxwell discussed ... in terms of a model in which the vacuum was like an elastic ... what counts are the equations themselves and not the model used to get them. We may only question whether the equations are true or false ... If we take away the model he used to build it, Maxwell’s beautiful edifice stands...’ – Richard P. Feynman, Feynman Lectures on Physics, v3, 1964, c18, p2.

Feynman is correct here, and he does go further in his 1985 book QED, where he discusses light from the path integrals framework:

‘Light ... "smells" the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ - Feynman, QED, Penguin, 1990, page 54.

I've got some comments about the real mechanism for Maxwell's "displacement current" from the logic signal cross-talk perspective here, here and here.

The key thing is in a quantum field theory, any field below the IR cutoff is exchange radiation with no virtual fermions appearing (no pair production). The radiation field has to do the work which Maxwell thought was done by the displacement and polarization of virtual charges in the vacuum.

The field energy is sustaining the propagation of light. Feynman's path integrals shows this pretty clearly too. Professor Clifford Johnson kindly pointed out here:

‘I like Feynman’s argument very much (although I have not thought about the virtual charges in the loops bit bit). The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths... then drill more slits and so more paths... then just drill everything away... leaving only the slits... no mask. Great way of arriving at the path integral of QFT.’

This is also the approach in Professor Zee's "Quantum Field Theory in a Nutshell" (Princeton University Press, 2003), Chapter I.2, Path Integral Formulation of Quantum Mechanics.

The idea is that light can go on any path and is affected most strongly by neighboring paths within a wavelength (transverse spatial extent) of the line the photon appears to follow.

What you have to notice, however, is that photons tend to travel between fermions. So does exchange radiation (gauge boson photons) that cause the electromagnetic field. So fermions constitute a network of nodes along which energy is being continuously exchanged, with observable photons of light, etc., travelling along the same paths as the exchange radiation.

It is entirely possible that light speed in the vacuum depends on the energy density of the background vacuum field (which could vary as the universe expands), just as the speed of light is slower in glass or air than in a vacuum.

Light speed, however, tends to slow down when the energy density of the electromagnetic fields through which it travels is higher: hence it slows down more in dense glass than in air. However, it is well worth investigating in more detail.

3:54 AM  
Anonymous Anonymous said...

Dear Nige,

There's nothing logically inconsistent about the speed of light changing with time - I didn't say that anywhere in my comment. I simply pointed out that the speed of light, written in meters/second, is by definition constant, because the meter is itself defined by the speed of light. My point was that stating that a dimensionful constant changes with time isn't by itself meaningful - you have to specify changing with respect to what. In fact physicists often set c=1, which simply means they measure all speeds in units of the speed of light. One can always choose to do that, although it isn't always the most convenient option.

The lesson is that it's usually best to take ratios, and put all the time dependence into dimensionless numbers (for example the speed of light divided by the speed of gravity waves in a non-Lorentz invariant theory).

Furthermore I mentioned that after glancing at the equations on this blog, it looks very much like one could simply choose a more standard set of units (for example meters/second) for c, in which it's constant, and one would then obtain the usual results for matter dominated FRW cosmology.

One also has to bear in mind that there are incredibly stringent experimental bounds on the breaking of Lorentz symmetry, as Magueijo refers to at the end of the abstract you linked to. Any theory where c changes (in a meaningful way, not as the result of an odd choice of units) will break Lorentz invariance and be subject to such constraints.

4:48 AM  
Blogger L. Riofrio said...

Anon, a definition of the meter based upon c is putting too much faith in technology. Better to have a big stick at NIST and define the meter that way. Einstein was quite insistent on using "rods and clocks."

This cosmology does indeed produce R ~ t^{2/3}. I have already written the paper on experimental constraints, and this cosmology fits the supernova data precisely.

5:26 AM  
Blogger nige said...

"One also has to bear in mind that there are incredibly stringent experimental bounds on the breaking of Lorentz symmetry, as Magueijo refers to at the end of the abstract you linked to. Any theory where c changes (in a meaningful way, not as the result of an odd choice of units) will break Lorentz invariance and be subject to such constraints." - Anonymous

Lorentz invariance is allegedly broken in many ways already.

First, as Smolin and others say in discussing "doubly special relativity", quantum field theory seems to have some fixed minimum grain size in the vacuum. That breaks Lorentz invariance because the length scale of the grain size doesn't obey Lorentz invariance.

I.e., the Lorentz contraction does not apply to the vacuum grain size, which is usually taken to be an absolute size irrespective of the motion of the observer, such as the Planck length.

That's the basis of Smolin's argument, described on p227 of his book "The Trouble with Physics."

I don't find Smolin's argument there totally convincing, purely because the Planck length is supposed to be the smallest length you can obtain from physical units, but it isn't. If you take the black hole event horizon radius 2GM/c^2 for an electron mass M, this distance is far smaller than the Planck scale.

Nobody has any theoretical, let alone experimental, basis for the Planck scale. There are loads of ways of combining fundamental constants to get distances. So until there is evidence, say from a particle accelerator the size of the galaxy that can probe the Planck scale, it's speculative.

But there are other indications that Lorentz invariance is just the result of a physical mechanism and not a universal law.

Quantum field theory implies that the number of virtual vacuum particles an observer interacts with is not independent of his or her motion, but depends on absolute motion:

"... what we learned has important applications to the study of quantum fields in curved backgrounds. In Quantum Field Theory in Minkowski space-time the vacuum state is invariant under the Poincare group and this, together with the covariance of the theory under Lorentz transformations, implies that all inertial observers agree on the number of particles contained in a quantum state. The breaking of such invariance, as happened in the case of coupling to a time-varying source analyzed above, implies that it is not possible anymore to define a state which would be recognized as the vacuum by all observers.

"This is precisely the situation when fields are quantized on curved backgrounds. ..."

- p. 85 of Introductory Lectures on Quantum Field Theory by Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo, http://arxiv.org/abs/hep-th/0510040 (emphasis added to highlight why Lorentz invariance is violated by quantum field theory, the fundamental physics of the standard model of particles).

In addition, the whole basis of general relativity is a move away from the fixed Lorentzian background dependence of special relativity; it is a move away from a definite Lorentzian metric. In general relativity, the metric is the result of the field equations for specified conditions.

About 99.9% of people using general relativity and writing about it don't understand Einstein's general covariance. So you get "Lorentzian covariance" being discussed. However, general covariance, which is the basis of general relativity, is actually very simple, as I found out in reading Einstein's original paper:

General covariance is expressed in tensor notation, where the subscripts identify the rank of a tensor and its type of variance.

‘The special theory of relativity... does not extend to non-uniform motion ... The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity... The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant).’

– Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916. (Emphasis here is Einstein's own italics in the original paper.)

So the widely held idea of "Lorentzian covariance" is just a nonsense. What matters is general covariance, which is background independence, i.e., the Einstein field equation without a fixed assumed metric.

The metric is the result of solving the field equation.

The Lorentz contraction is a physical result of moving a charge in an exchange radiation field. You are going to get directional compressions. It's a consequence of Yang-Mills exchange radiation under certain conditions, not a universal law. There's a simple analogy to the gravitational contraction you get in a mass field. In each case, exchange radiation is causing contractions in the direction of gravitational field lines or the direction of motion relative to some external observer.

Really, general relativity is background independent: the metric is always the solution to the field equation, and can vary in form depending on the assumptions used, because the shape of spacetime (the type and amount of curvature) depends on the mass distribution, the cc value, etc. The weak field solutions like the Schwarzschild metric have a simple relationship to the FitzGerald-Lorentz transformation. Just change v^2 to 2GM/r, and you get the Schwarzschild metric from the FitzGerald-Lorentz transformation, on the basis of the energy equivalence of kinetic and gravitational potential energy:

E = (1/2)mv^2 = GMm/r, hence v^2 = 2GM/r.

Hence the contraction factor (1 – v^2/c^2)^{1/2} becomes (1 – 2GM/(rc^2))^{1/2}, which is the contraction and time dilation form of the Schwarzschild metric.
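The equivalence of the two factors is easy to check numerically, e.g. at the surface of the Earth, where v becomes the escape speed (the Python and constant values below are my own illustration, not part of the original comment):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24   # Earth mass, kg
r = 6.371e6    # Earth radius, m
c = 2.998e8    # speed of light, m/s

v = math.sqrt(2 * G * M / r)  # escape speed, about 11.2 km/s
# FitzGerald-Lorentz contraction factor at speed v:
lorentz = math.sqrt(1 - (v / c) ** 2)
# Schwarzschild contraction/time-dilation factor at radius r:
schwarz = math.sqrt(1 - 2 * G * M / (r * c ** 2))
print(v, lorentz, schwarz)  # the two factors agree, since v^2 = 2GM/r
```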

Einstein’s equivalence principle between inertial and gravitational mass in general relativity when combined with his equivalence between mass and energy in special relativity, implies that the inertial energy equivalent of a mass (E = 1/2 mv^2) is equivalent to the gravitational potential energy of that mass with respect to the surrounding universe (i.e., the amount of energy released per mass m if the universe collapsed, E = GMm/r, where r the effective size scale of the collapse). So there are reasons why the nature of the universe is probably simpler than the mainstream suspects:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

- R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

5:35 AM  
Anonymous Anonymous said...

Dear L. Riofrio,

I'm afraid the supernova data is not in fact consistent with R~t^{2/3}. Indeed it was precisely the supernova data that first showed that the universe is no longer matter dominated, and that the expansion is accelerating. If your solution is equivalent to that of matter-dominated FRW, as it looks, you will find thousands of papers explaining why that simply does not fit the data. It was just this mismatch that forced cosmologists to posit the existence of dark energy.

Nige,

there are many things in your post that are incorrect, but I don't have the time or inclination to explain why. I will just make one comment - it is certainly not the case that (Lorentz invariant) quantum field theory by itself has a minimum size or violates Lorentz invariance spontaneously, and that is not what Alvarez-Gaume and V-M are saying in the quote you give. They are saying that IF Lorentz invariance is broken, for example by a time dependent source or a curved background, THEN there is no longer an invariant notion of particle or a fixed particle number, which is true (a particle is a thing defined by a representation of the Lorentz group). Many of the best constraints on the size of Lorentz violation come from that sort of effect, which we do not observe.

One other thing you might recall is that all smooth manifolds - including all the solutions to the equations of general relativity that we can control - are locally flat (and therefore locally Lorentz invariant). Since the one we live in is very big, we see physics that is almost exactly Lorentz invariant, and the more so the higher the energy or smaller the scale of the experiment.

Furthermore there are some features that all solutions to GR have in common, even on large scales where they are not at all Lorentz invariant - for example that all photons will travel on the same (null) geodesics, regardless of their frequency, which wouldn't be true in a less symmetric theory. The coinciding arrival times of photons of differing energy from extremely distant events is therefore another very strong constraint, one that almost all Lorentz violating theories will fail.

10:18 AM  
Blogger nige said...

This comment has been removed by the author.

11:37 AM  
Blogger nige said...

"I'm afraid the supernova data is not in fact consistent with R~t^{2/3}. Indeed it was precisely the supernova data that first showed that the universe is no longer matter dominated, and that the expansion is accelerating. If your solution is equivalent to that of matter-dominated FRW, as it looks, you will find thousands of papers explaining why that simply does not fit the data. It was just this mismatch that forced cosmologists to posit the existence of dark energy." - anonymous

You may well have reason to be afraid, because you're plain wrong about dark energy! Louise's result R ~ t^{2/3} for the expanding size scale of the universe is indeed similar to what you get from the Friedmann-Robertson-Walker metric with no cosmological constant. However, her model works because the varying velocity of light affects the redshifted distance-luminosity relationship, and the data don't show that the expansion rate of the universe is slowing down in a way that requires dark energy. As a Nobel Laureate explains:

‘the flat universe is just not decelerating, it isn’t really accelerating’

- Professor Phil Anderson, http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

Louise's main analysis has a varying light velocity which affects several relationships. For example, the travel time of the light will be affected, influencing the distance-luminosity relationship.

What prevents long-range gravitational deceleration isn't dark energy.

All the quantum field theories of fundamental forces (the standard model) are Yang-Mills, in which forces are produced by exchange radiation.

The mainstream assumes that quantum gravity will turn out similarly. Hence, they assume that gravity is due to exchange of gravitons between masses (quantum gravity charges). In the lab, you can’t move charges apart at relativistic speeds and measure the reduction in Coulomb’s law due to the redshift of exchange radiation (photons in the case of Coulomb’s law, assuming current QED is correct), but the principle is there. Redshift of gauge boson radiation weakens its energy and reduces the coupling constant for the interaction. In effect, redshift by the Hubble law means that forces drop off faster than the inverse-square law even at low energy, the additional decrease beyond the geometric divergence of field lines (or exchange radiation divergence) coming from redshift of exchange radiation, with their energy proportional to the frequency after redshift, E=hf.

The universe therefore is not like the lab. All forces between receding masses should, according to Yang-Mills QFT, suffer a bigger fall than the inverse square law. Basically, where the redshift of visible light radiation is substantial, the accompanying redshift of exchange radiation that causes gravitation will also be substantial; weakening long-range gravity.
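The claimed fall-off is just the ordinary redshift of quantum energy: since E = hf and the observed frequency is the emitted frequency divided by (1 + z), the surviving fraction of energy is 1/(1 + z). A trivial numerical illustration (my own sketch; the emitted frequency is an arbitrary choice):

```python
h = 6.626e-34   # Planck constant, J s
f_emit = 5e14   # Hz, an arbitrary emitted frequency (illustrative only)

def surviving_fraction(z):
    """Fraction of emitted quantum energy E = h*f remaining after redshift z."""
    return 1.0 / (1.0 + z)

for z in (0.5, 1.0, 5.0):
    E_obs = h * f_emit * surviving_fraction(z)
    print(f"z = {z}: energy reduced to {surviving_fraction(z):.3f} of emitted, E = {E_obs:.3e} J")
```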

When you check the facts, you see that the role of “cosmic acceleration” as produced by dark energy (the cc in GR) is designed to weaken the effect of long-range gravitation, by offsetting the assumed (but fictional!) long range gravity that slows expansion down at high redshifts.

In other words, the correct explanation according to current mainstream ideas about quantum field theory is that the 1998 supernovae results, showing that distant supernovae aren't slowing down, are due to a weakening of gravity caused by the redshift and accompanying energy loss (by E=hf) of the exchange radiations causing gravity. It's simply a quantum gravity effect: redshifted exchange radiation weakens the gravity coupling constant G over large distances in an expanding universe.

The error of the mainstream is assuming that the data are explained by another mechanism: dark energy. Instead of taking the 1998 data to imply that GR is simply wrong over large distances because it lacks quantum gravity effects due to redshift of exchange radiation, the mainstream assumed that gravity is perfectly described in the low energy limit by GR and that the results must be explained by adding in a repulsive force due to dark energy which causes an acceleration sufficient to offset the gravitational acceleration, thereby making the model fit the data.

Back to Anderson's comment, "the flat universe is just not decelerating, it isn't really accelerating": supporting this, and proving that the cosmological constant must vanish in order that electromagnetism be unified with gravitation, is Lunsford's unification of electromagnetism and general relativity, on the CERN document server at http://cdsweb.cern.ch/search?f=author&p=Lunsford%2C+D+R

Lunsford’s paper was censored off arxiv without explanation.

Lunsford had already had it published in a peer-reviewed journal prior to submitting to arxiv. It was published in the International Journal of Theoretical Physics, vol. 43 (2004) no. 1, pp.161-177. This shows that unification implies that the cc is exactly zero, no dark energy, etc.

The way the mainstream censors out the facts is to first delete them from arxiv and then claim “look at arxiv, there are no valid alternatives”.

"it is certainly not the case that (Lorentz invariant) quantum field theory by itself has a minimum size or violates Lorentz invariance spontaneously," - anonymous

You haven't read what I wrote. I stated precisely where the problem is alleged to be by Smolin, which is in the fine graining.

In addition, you should learn a little about renormalization and Wilson's approach to that, which is to explain the UV cutoff by some grain size in the vacuum - simply put, the reason why UV divergences aren't physically real (infinite momenta as you go down toward zero distance from the middle of a particle) is that there's nothing there. Once you get down to size scales smaller than the grain size, there are no loops.

If there is a grain size to the vacuum - and that seems to be the simplest explanation for the UV cutoff - that grain size is absolute, not relative to motion. Hence, special relativity, Lorentzian invariance is wrong on that scale. But hey, we know it's not a law anyway, there's radiation in the vacuum (Casimir force, Yang-Mills exchange radiation, etc.), and when you move you get contracted by the asymmetry of that radiation pressure. No need for stringy extradimensional speculations, just hard facts.

The cause of Lorentzian invariance is a physical mechanism, and so the Lorentzian invariance ain't a law, it's the effect of a physical process that operates under particular conditions.

"... and that is not what Alverez-Gaume and V-M are saying in the quote you give." - anonymous

I gave the quote so you can see what they are saying by reading the quote. You don't seem to understand even the reason for giving a quotation. The example they give of curvature is backed up by other stuff based on experiment. They're not preaching like Ed Witten:

‘String theory has the remarkable property of predicting gravity.’ - Dr Edward Witten, M-theory originator, Physics Today, April 1996.

"One other thing you might recall is that all smooth manifolds - including all the solutions to the equations of general relativity that we can control - are locally flat (and therefore locally Lorentz invariant)." - anonymous

Wrong, curvature is not flat locally in this universe, due to something called gravity, which is curvature and occurs due to masses, field energy, pressure, and radiation (all the things included in the stress-energy tensor T_ab). Spacetime is flat globally because there's no long range gravitational deceleration.

Locally, curvature has a value dependent upon the gravitational field or your acceleration relative to the gravitational field.

The local curvature due to the planet Earth comes down to the radius of the Earth being contracted by (1/3)MG/c^2 = 1.5 mm in the radial but not the transverse direction.

So the radius of earth is shrunk 1.5 mm, but the circumference is unaffected (just as in the FitzGerald-Lorentz contraction, length is contracted in the direction of motion, but not in the transverse direction).

Hence, the curvature of spacetime locally due to the planet earth is enough to violate Euclidean geometry so that circumference is no longer 2*Pi*R, but is very slightly bigger. That's the "curved space" effect.
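The 1.5 mm figure is easy to check numerically. A quick sketch using standard values for G, the Earth's mass, and c (the constants below are textbook values, not taken from any particular source in this thread):

```python
# Numerical check of the ~1.5 mm radial contraction quoted above:
# contraction = (1/3) * G * M / c^2
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
c = 2.998e8     # speed of light, m/s

contraction = G * M / (3 * c**2)   # radial contraction in metres
print(f"{contraction * 1000:.2f} mm")  # -> about 1.48 mm
```

which rounds to the 1.5 mm quoted.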

Curvature only exists locally. It can't exist globally, throughout the universe, because over large distances spacetime is flat. It does exist locally near masses, because curvature is the whole basis for describing gravitation/acceleration effects in general relativity.

Your statement that spacetime is flat locally is just plain ignorance because in fact it isn't flat locally due to spacetime curvature caused by masses and energy.

11:38 AM  
Blogger nige said...

A correction to one sentence above:

...Wrong, spacetime is not flat locally in this universe, due to something called gravity, which is curvature and occurs due to masses, field energy, pressure, and radiation (all the things included in the stress-energy tensor T_ab). ...

11:47 AM  
Anonymous Anonymous said...

Well, this will be my last comment. The only point I wanted to make was that L. Riofrio might find it easier to discuss her ideas with other physicists if she figured out how to express them in more conventional language. A good start is to choose units where the speed of light is constant. Once that is done, it appears that her equations reduce to R~t^{2/3}. I thought she had agreed with that in her comment above - perhaps that's not what she meant, but if it was, there seems to be a problem, as that model is solidly ruled out by the data.

Nige,

thanks, but I'm not interested in arguing with you. I don't think it will be productive for either of us. If you want to learn something, take your favorite metric (which can be a solution to GR with or without a non-zero T_{\mu \nu}, you choose), expand it around any non-singular point, and you will discover it is indeed locally flat (locally flat doesn't mean flat everywhere - it means flat spacetime is a good approximation to it close to any given point). Or if you are more geometrically inclined, read about tangent spaces to manifolds - or just think about using straight tangent lines to approximate a small part of a curvy line, and you'll get the idea.

12:21 PM  
Blogger L. Riofrio said...

It does indeed reduce to R ~ t^{2/3}, which ought to mean something. Nice comments, nige.
Earth is locally flat, but curved on the large scale.
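In outline, using the GM = tc^3 relation, the reduction goes:

GM = tc^3  =>  c = (GM/t)^{1/3}

R = ct = (GM)^{1/3} t^{2/3}, i.e. R ~ t^{2/3}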

3:22 PM  
Blogger nige said...

"I don't think it will be productive for either of us. If you want to learn something, take your favorite metric (which can be a solution to GR with or without a non-zero T_{\mu \nu}, you choose), expand it around any non-singular point, and you will discover it is indeed locally flat (locally flat doesn't mean flat everywhere - it means flat spacetime is a good approximation to it close to any given point). Or if you are more geometrically inclined, read about tangent spaces to manifolds - or just think about using straight tangent lines to approximate a small part of a curvy line, and you'll get the idea." - anonymous

Anonymous, even if you take all the matter and energy out of the universe in order to avoid curvature and make it flat, you don't end up with flat spacetime, because spacetime itself disappears, in the mainstream picture.

You can't generally say that on small scales spacetime is flat, because that depends on how far you are from matter.

Your analogy of magnifying the edge of a circle until it looks straight as an example of flat spacetime emerging from curvature as you go to smaller scales is wrong: on smaller scales gravitation is stronger, and curvature is greater. This is precisely the cause of the chaos of spacetime on small distance scales, which prevents general relativity working as you approach the Planck scale distance!

In quantum field theory, as you go down to smaller and smaller size scales, far from spacetime getting smoother as in your example, it gets more chaotic:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

- R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

11:22 PM  
Blogger nige said...

This comment has been removed by the author.

11:38 PM  
Blogger nige said...

anonymous, your argument about spacetime being flat on small scales requires putting a uniform matter distribution into T_ab, which is the sort of false approximation that leads to misunderstandings.

Mass and energy are quantized; they occur in lumps. They're not continuous, and the error you are implying is the statistical one of averaging out discontinuities in T_ab, and then falsely claiming that the flat result on small scales proves spacetime is flat on small scales.

No, it isn't. It's quantized. It's just amazing how much rubbish comes out of people who don't understand physically that a statistical average isn't proof that things are continuous. As an analogy, children are integers, and the fact that you get 2.5 kids per family as an average (or whatever the figure is) doesn't disprove the quantization.

You can't argue that a household can have any fractional number of children, because the mean for a large number of households is a fraction.
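The averaging point in miniature (the numbers below are illustrative, chosen to give a mean of 2.5):

```python
# Each family has a whole number of children, yet the mean is fractional.
families = [2, 3, 2, 3, 2, 3]           # child counts, all integers
mean = sum(families) / len(families)    # arithmetic mean
print(mean)                             # -> 2.5
# The fractional mean says nothing about any single family having half
# a child; likewise a smoothed T_ab says nothing about the underlying
# quantization of mass and energy.
```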

Similarly, if you put an average into T_ab as an approximation, assuming that the source of gravity is of uniform density, you're putting in an assumption that doesn't hold on small scales, only on large scales. You can't therefore claim that locally spacetime is flat. That contradicts what we know about the quantization of mass and energy. Only on large scales is it flat.

11:41 PM  
Blogger L. Riofrio said...

Lots of misunderstandings out there. It is disappointing when someone who doesn't understand something insists that it can't be right.

8:07 AM  
Blogger Geologist said...

Surely Dr. Thomas Gold is right!

3:54 PM  
Anonymous Ted Green said...

Ms. Riofrio, have you read Barry Setterfield's opinion on light decay?

9:45 AM  
