EM Drive System No Longer Further Confirmed

He did debunk the theory, but if, and this is an enormous if, it works practically anyway, then it's still going to be cool.
Cooler, in fact. If it works despite the lack of a solid theory, then it would demand explanation. (I think it's misleading to call what's been posted the "theoretical basis", when this is experimentally based.) But that's getting ahead of ourselves.
 
I'm honestly not terribly certain what's going on (I won't be taking Physics 'til next semester, and it's a long road from Physics 1 to this kind of stuff) so I'd appreciate the help.
Some more elaborated notes on the Bohr radius section follow, as well as a little comparative trip back to 2007.

For convenience, here's one of the relevant pages from the 'Principles of Q-thruster Operation' section [pdf].
Though uncertainty and error propagation can get complicated in science, for the simple arithmetic we'll be considering, consideration of significant figures is quite sufficient. This is commonly taught in high-school science classes (well, at least chemistry and physics). Anyway, since Sonny claims to derive the Bohr radius to three digits of accuracy, a₀ = 5.29×10⁻¹¹ m, through a formula of the form r = X^(1/3), that means he must know X to about three digits of accuracy too, and consequently the terms that go into it as well. But in fact most of the quantities involved don't match Sonny's values to more than one digit of accuracy.
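To put a number on that: for a quantity obtained as r = X^(1/3), the relative error only shrinks by a factor of three, which can't conjure three significant figures out of one. A quick sketch in plain Python (illustrative numbers, not Sonny's actual inputs):

```python
# Error propagation through r = X**(1/3):
# to first order, (dr/r) = (1/3) * (dX/X).
def cbrt_rel_error(rel_err_X):
    """First-order relative error of X**(1/3), given the relative error of X."""
    return rel_err_X / 3.0

# Check against a direct computation with an illustrative X.
X = 1.0e-31           # arbitrary magnitude; only the *relative* error matters
rel_err_X = 0.10      # X known to ~1 significant figure (10%)

r_mid = X ** (1 / 3)
r_hi = (X * (1 + rel_err_X)) ** (1 / 3)
direct = (r_hi - r_mid) / r_mid

print(cbrt_rel_error(rel_err_X))  # ~0.033
print(direct)                     # ~0.032, agreeing with the first-order estimate
```

So a ~10% uncertainty in X still leaves ~3% in r: two digits at best, not three.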

Conclusion 1: Sonny claims that his calculations are about a hundred times more accurate than they actually are.
Conclusion 2: Sonny very probably cherry-picked the numbers that miraculously make an "exact match" for the final answer.

This includes:
— The proton radius, which Sonny claims is 1.20 fm, but is actually under 0.88 fm.
• The first step is to calculate a quasi-classical density for the hydrogen nucleus. The radius of the hydrogen atom nucleus is given as R₀ = 1.2×10⁻¹⁵ m (R = R₀·A^(1/3), where R₀ = 1.2×10⁻¹⁵ m and A is the atomic number - these are experimentally determined by electron scattering).
He's claiming that the radius of the nucleus of a hydrogen atom is 1.20 fm, because that's the claimed accuracy of his derivation of the Bohr radius. The justification is a scaling formula. But this scaling is based on the assumption of constant nuclear density, which is rather unlikely to hold to this accuracy, and indeed it does not. Wikipedia notes that R ≈ R₀A^(1/3) holds for heavier nuclei, but the front coefficient can vary between 1.2 and 1.5 fm. But just so we're not reliant on it, let's check a popular textbook:
Griffiths 'Introduction to Quantum Mechanics' 2nd ed. said:
Problem 8.4 Calculate the lifetimes of U²³⁸ and Po²¹²... Hint: The density of nuclear matter is relatively constant, ... . Empirically,
r₁ ≅ (1.07 fm) A^(1/3)
If anything, Wikipedia is understating how inaccurate this formula is. If we wish to dig deeper, a more accurate empirical formula was developed by Elton in 1961:
R ≈ (R₀ + R₁A^(-2/3) + R₂A^(-4/3))·A^(1/3),
the basic form of which is still used and referenced by scholarly articles today, though the details may be different from Elton's original.

Anyway, the point is that we cannot rely on this simple-minded scaling, which assumes a constant nuclear density across all nuclei, to be anywhere near that accurate. Indeed, the proton radius is below 0.88 fm [NIST], so really we have only one significant figure of accuracy if we insist on using the scaling formula down to hydrogen. And though, again, there's some controversy over the proton radius, this value is actually near the upper end of the competing measurements, and it's still nowhere close to what Sonny is using.
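For concreteness, here's a small Python sketch comparing what the constant-density scaling predicts for hydrogen (A = 1) under the two coefficients quoted above against the measured proton radius (the 0.88 fm figure from the discussion above):

```python
# Naive constant-density scaling R = R0 * A**(1/3), applied to hydrogen (A = 1),
# with the two coefficients mentioned above (all values in femtometres).
coefficients = {
    "Sonny": 1.20,       # value used in the Q-thruster notes
    "Griffiths": 1.07,   # Introduction to Quantum Mechanics, 2nd ed.
}
proton_radius = 0.88     # fm, upper end of the competing measurements [NIST]

A = 1
for name, R0 in coefficients.items():
    R = R0 * A ** (1 / 3)
    off = 100 * (R - proton_radius) / proton_radius
    print(f"{name}: R = {R:.2f} fm, off by {off:.0f}%")
# Sonny's 1.20 fm overshoots the measured proton radius by ~36%:
# one significant figure of agreement at best.
```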
— The mass-energy density of the universe. Sonny uses 9.90×10⁻²⁷ kg/m³, which is somewhat far from the maximum likelihood value of 9.13×10⁻²⁷ kg/m³; furthermore, the actual measurements are rather uncertain, so three digits is way more than is actually appropriate if we're using it to derive something.
• Using ρv = (2/3)·9.9×10⁻²⁷ kg/m³, ...
Where does this number even come from? Well, the 9.90×10⁻²⁷ kg/m³ is about the overall density of the universe, and this must be the interpretation, because that's what we get from the first Friedmann equation. The universe is spatially flat, so this is also the critical density
ρc = 3H²/(8πG).
Solving for the Hubble parameter H with Sonny's value for the density gives
H = 72.6 km/s/Mpc.
Well, damn! Do we really know the Hubble parameter to one part in a thousand? Of course we don't. Referring again to the nine-year WMAP results, the maximum likelihood value is 69.7 km/s/Mpc, which gives 9.13×10⁻²⁷ kg/m³, i.e. quite far away from Sonny's value. Of course, the actual measurements have rather wide error bars (see p. 9), so once again it's completely inappropriate to pretend that the results have anywhere near the accuracy that Sonny claims.
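The arithmetic above is easy to reproduce; here's a short Python sketch using the values from this section (with G and the Mpc conversion taken as standard constants):

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
MPC = 3.0857e22        # metres per megaparsec

def critical_density(H_km_s_Mpc):
    """Critical density rho_c = 3 H^2 / (8 pi G), with H given in km/s/Mpc."""
    H = H_km_s_Mpc * 1000.0 / MPC   # convert H to SI units (1/s)
    return 3.0 * H**2 / (8.0 * math.pi * G)

def hubble_from_density(rho):
    """Invert the above: the H (km/s/Mpc) implied by a given density."""
    H = math.sqrt(8.0 * math.pi * G * rho / 3.0)
    return H * MPC / 1000.0

print(critical_density(69.7))         # ~9.13e-27 kg/m^3 (WMAP9 max likelihood)
print(hubble_from_density(9.90e-27))  # ~72.6 km/s/Mpc, Sonny's implied value
```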
— Sonny's claim that the vacuum energy density is (2/3)(9.90×10⁻²⁷ kg/m³) is actually a pretty good estimate, but it's a result of cancellation of errors (more on the 2/3 factor later), and is rather indicative of the cherry-picking approach. As usual, it also involves an inappropriate level of claimed accuracy.
If we go by maximum likelihood values again just for simplicity, then the vacuum energy fraction is 0.718, not 2/3, but this error mostly compensates for the one on total energy density (as in the previous section).

———

The apparent story of this 2/3 is actually pretty 'interesting'.

Back in the STAIF 2007 presentation [pdf], Sonny observed the following numerical coincidence (T₀ ≈ 13.7 Gyr is the age of the universe in the current epoch):
{ρvac c²}·{4π(cT₀)²} = {c⁴/G}
{Vacuum energy density}·{Area of horizon sphere} = {Planck force}
The reason it's a coincidence is simple: as the vacuum energy density is constant while the age of the universe isn't, this is only true in the current epoch (and not really even that; it's not an impressively close match). Instead of realizing the obvious, Sonny concluded that since this can be algebraically rearranged into
G = 1/(4πρvac T₀²),
this must mean that the gravitational constant is dependent on the cosmological conditions. But look at what it actually says: G is inversely proportional to the square of the age of the universe. What does that mean? Was gravity radically stronger in the past? Because that's not at all what we observe, and in any case that's a mighty curious conclusion to draw from the fact that terms can be algebraically rearranged!
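Just to show how loose the "coincidence" actually is at the present epoch, here's a quick numerical check (Python; the WMAP-ish input values are illustrative assumptions of mine for the sketch):

```python
import math

G = 6.674e-11                  # measured gravitational constant, m^3 kg^-1 s^-2
GYR = 1e9 * 3.156e7            # seconds per gigayear

# Illustrative inputs (WMAP9-ish): total density and vacuum fraction.
rho_tot = 9.13e-27             # kg/m^3
rho_vac = 0.718 * rho_tot      # vacuum energy density
T0 = 13.7 * GYR                # age of the universe, s

# Sonny's rearranged "coincidence": G = 1 / (4 pi rho_vac T0^2)
G_implied = 1.0 / (4.0 * math.pi * rho_vac * T0**2)
print(G_implied)               # ~6.5e-11, a few percent off from measured G
print(G_implied / G)           # and since T0 grows while rho_vac stays constant,
                               # the "equality" fails at any other epoch
```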

Conclusion 3: Sonny's approach is based on numerological coincidences with completely nonsensical justifications.

Somewhere after that, Sonny must have realized this makes no bloody sense. Enter the first Friedmann equation, which for a flat (k = 0) universe is:
H² = (8πG/3)ρtot,
where ρtot is the total energy density (including vacuum energy). Aha! If we
pretend that the age of the universe is the Hubble time tH = 1/H, and
pretend that the vacuum energy is ρvac = (2/3)ρtot,
then we get exactly Sonny's Planck-force equation! Problem solved!
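Spelling out the algebra of those two substitutions, so it's clear the "derivation" is just rearrangement:

```latex
H^2 = \frac{8\pi G}{3}\rho_{\mathrm{tot}}
\;\xrightarrow{\;H = 1/T_0\;}\;
\frac{1}{T_0^2} = \frac{8\pi G}{3}\rho_{\mathrm{tot}}
\;\xrightarrow{\;\rho_{\mathrm{tot}} = \tfrac{3}{2}\rho_{\mathrm{vac}}\;}\;
\frac{1}{T_0^2} = 4\pi G\,\rho_{\mathrm{vac}}
\;\Longleftrightarrow\;
G = \frac{1}{4\pi \rho_{\mathrm{vac}} T_0^2}.
```

which is exactly the STAIF 2007 formula, recovered by assuming the two things that needed to be justified.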

The age of the universe is vaguely close to the Hubble time, but only at the present epoch, so it's a coincidence that won't hold in the future and hasn't in the past. The fraction of the total energy density that the vacuum makes up is rather far from 2/3 (it's about 72%), and in any case is time-dependent because matter and radiation get diluted as the universe expands. In the future, it will go higher and higher.

Conclusion 4: Sonny has a predetermined numerological conclusion in mind and is simply fishing for completely ad hoc reasons to justify it.
 
From what I've read (mostly via NASASpaceFlight), none of the team members think it's reactionless.
Not reactionless as in not reacting with something, that's basic Newton's Third. Reactionless as in "do not need to carry a shit-ton of reaction mass with you". I may have forgotten almost everything I learned in High School, but I was once quite good at basic science, and I haven't let it decay quite that far.

Still, given what's been said later in the thread, it's looking really really unlikely that it is the real thing. I can still hope, though. Although if it is real, it's for damn sure their theoretical explanation isn't.
 
So we're at full-on "We have our conclusion, let's build backwards from there" territory. I mean, nothing so far says this doesn't work, beyond the fact it fits no functioning model, but the same argument can be made for invisible fairies living under toadstools.

And no one takes invisible fairies very seriously.
 
Not reactionless as in not reacting with something, that's basic Newton's Third. Reactionless as in "do not need to carry a shit-ton of reaction mass with you". I may have forgotten almost everything I learned in High School, but I was once quite good at basic science, and I haven't let it decay quite that far.

Still, given what's been said later in the thread, it's looking really really unlikely that it is the real thing. I can still hope, though. Although if it is real, it's for damn sure their theoretical explanation isn't.
So, "ReMass-less"
 
The model they try to describe it with is reactionless by any other name. Pushing off virtual particles that pop into existence purely to allow momentum to be imparted, and which then annihilate, is the same as being reactionless, just with a case of special pleading.
Is it pulling energy out of nowhere or just not bothering with reaction mass?
 
Is it pulling energy out of nowhere or just not bothering with reaction mass?

The second. Their theory is that they manage to bamboozle the universe out of reaction mass. It's like the physics version of that roommate who always manages to convince everyone that he will have the rent money next month.....and everyone keeps believing him.

Theoretical model....sounds like bullshit to me, and I waltz through peswiki.com once or twice a month, so I have become remarkably adept at spotting bullshit science.

Engineering and practicality....crossing my fingers this is some crazy bullshit that actually works.
 
Engineering and practicality....crossing my fingers this is some crazy bullshit that actually works.

Don't count on it.

This thing would violate conservation of momentum. Conservation of momentum and energy were at the basis of Newtonian mechanics, which worked very well, as you probably know. They stayed at the basis for everything from quantum mechanics to thermodynamics. When you study gauge theory you realize just how fundamental this all is. Gauge theory is by far the most accurate framework we have ever produced, accurate down to 12 decimal places, twelve! It is the one thing in the universe where we are absolutely sure we're right on the money.

If this drive works as advertised it isn't just some little "Oh hey, this is a funny little trick". It is "OH MY GOD ALL OF SCIENCE IS WRONG!!!!". It would mean we can't trust measurements anymore, can't use any model to describe the universe and our maths becomes meaningless.

It would invalidate 400 years of accurate measurements.... This is not a one in a million shot, it's a one in a googol. I honestly think that invisible fairy farts are a MORE likely explanation than 'pushing onto vacuum energy' or whatever they came up with.

I would bet my life savings without hesitation that the thrust is due to some weird interaction of the magnetic field, or evaporation of the copper plates or some other mundane but minor force generator.

So yea, not to dash everyone's hopes and dreams, but this isn't happening.
 
It occurs to me that the guy who presented the theory may have been incorrectly parroting what he heard from other people on the team.
 
Question, do any of the articles or papers have details on the raw displacements of the torque pendulum used to measure the force? I'd like to do some back of the envelope calculations on a possible source of error I haven't seen mentioned before, but I need to know the actual measured displacements instead of just the forces. I'm probably wrong, so I don't want to start talking about this as a potential source of error before I throw a few numbers at it.

Related question, have they shown the drive can do work (a maintained force over a change in position of the center of mass) or just that it can produce a force? I don't see any mention of it, but I skimmed parts of the article and haven't found the full paper yet (assuming there was a second one). If it can produce work it's a more significant result in my book and invalidates my idea.

Edit: I just realized I'd also need the masses of the assorted pieces and the dimensions of the set up. That makes actually figuring this out pretty impossible without access to the set up itself. Even an order of magnitude approximation would be hard.

The thought was that the center of mass of the system (including the torsion pendulum) could be remaining constant while the (heavy-looking) EM drive elastically and asymmetrically distorted, causing the points where the drive connects to the pendulum to shift relative to the center of mass of the system, creating an offset in the pendulum. If they're measuring forces based on the deflection of the pendulum, this would appear as a force even though the center of mass hasn't moved and thus no work has been done.

I hadn't seen anyone bring up internal body forces and work, so I thought I'd throw it out there. I'm probably wrong, but I don't think I can really check, which bugs me.
 
If I had to guess, I'd say that's where you'd get your violation of conservation of momentum: unlike real particles, when virtual particles annihilate, they take all of their energy and momentum with them, because no energy or momentum went into their creation.
Idiot here: What is a 'virtual particle'? When I hear this term my first thought is a particle that is artificially simulated, but I strongly sense that this is not the case.
 
Idiot here: What is a 'virtual particle'? When I hear this term my first thought is a particle that is artificially simulated, but I strongly sense that this is not the case.

A basic principle of quantum mechanics is called the "Heisenberg Uncertainty Principle".

It basically says that there are certain pairs of properties that you can't both measure to arbitrary precision at the same time. The classical example would be an electron. You can measure the position, but as you narrow down the position you also increase your uncertainty in its velocity. If you narrow down exactly where it is, you have no idea how fast it is going. Vice versa, if you know exactly how fast it is going, you have no clue where it is.

This isn't because our instruments just suck; it is an inherent property of the universe. Explaining why that is would be a bit too complicated for a forum post, but if you want to know more, look up wave packets and go from there.
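For anyone who wants to see the position/momentum trade-off rather than take it on faith, here's a toy numerical demonstration (pure Python, with ħ = 1): a Gaussian wave packet saturates the Heisenberg bound, and narrowing it in position necessarily widens it in momentum.

```python
import math, cmath

# Toy illustration (hbar = 1): a Gaussian wave packet has position spread
# sigma_x = sigma, and its Fourier transform has momentum spread
# sigma_k = 1/(2*sigma), so the product sits right at Heisenberg's bound
# of 1/2. Squeezing one spread necessarily widens the other.
N, L = 256, 30.0
dx = L / N
xs = [(i - N / 2) * dx for i in range(N)]
sigma = 1.5
psi = [math.exp(-x * x / (4 * sigma * sigma)) for x in xs]

def spread(values, amps):
    """Standard deviation of `values` weighted by probability |amplitude|^2."""
    w = [abs(a) ** 2 for a in amps]
    norm = sum(w)
    mean = sum(v * p for v, p in zip(values, w)) / norm
    return math.sqrt(sum((v - mean) ** 2 * p for v, p in zip(values, w)) / norm)

# Brute-force discrete Fourier transform onto a symmetric k-grid.
ks = [2 * math.pi * (m - N / 2) / L for m in range(N)]
phi = [sum(p * cmath.exp(-1j * k * x) for p, x in zip(psi, xs)) for k in ks]

sx, sk = spread(xs, psi), spread(ks, phi)
print(sx, sk, sx * sk)   # ~1.5, ~0.33, ~0.5
```

Shrink `sigma` and `sx` goes down while `sk` goes up; the product stays pinned near 1/2.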

This same thing holds for all fields; electromagnetic, gravitational etc. You can't know both the rate of change in the field and the actual value of a field. Now look at a perfect vacuum. In a classical vacuum there are no particles, all fields are exactly zero and there is no change ever. But this means that both the value of the field and the rate of change would be 0. Something that can't be the case due to Heisenberg.

So in reality a vacuum isn't really empty, but it is very bumpy and constantly fluctuating because the uncertainty principle does not allow it to be empty. So you have this frothing maelstrom of particles and fields popping into existence and immediately disappearing. These particles (and some other weird 'not really there' particles) are called virtual particles and are what this drive supposedly pushes against.

Virtual particles can do a lot of weird stuff; like having negative energy, going back in time or violating momentum conservation. So they're a prime target for crackpot science (like this drive) to use as explanation. They're the carbon nanotubes of quantum mechanics: they can fix every problem. But it is important to note that virtual particles can only do these things if the universe on the whole obeys all the usual laws. So a virtual particle with negative energy MUST be cancelled out by a positive energy virtual particle in its immediate environment. Same goes for all those other violations and that's why this drive can't work.
 
A lot of popular expositions on virtual particles unintentionally engender some misconceptions, even if they say correct things, because the audience does not always have enough knowledge to interpret it correctly. In this case, I think that the general context is actually more important. To that end, some points:
— Virtual particles are not limited to discussions of the vacuum.
— Virtual particles are not inherently quantum-mechanical, e.g. classical field theory can be interpreted in terms of virtual particles as well.

Fundamentally, a virtual particle is simply an intuitive interpretation of a type of mathematical term that occurs in a particular kind of approximation scheme.

It frequently happens that we can't calculate the predictions of a theory exactly and therefore must resort to some sort of approximation. A general technique to do this when what you're looking for is 'close' to something you do know how to solve exactly is called perturbation theory. Roughly speaking,
{answer to complicated problem} = {answer to simple problem} + {series of correction terms}.​
However, things can get so complicated that it's very difficult to keep track of all the mathematical terms produced by such a procedure. To deal with this book-keeping problem, people draw graphical diagrams.
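A toy example, far simpler than any field theory but with the same structure: solve x = 1 + εx² by expanding around the exactly solvable ε = 0 problem (the specific equation is just for illustration):

```python
# Toy perturbation theory: solve x = 1 + eps * x**2 for small eps.
# The "simple problem" (eps = 0) has answer x = 1; corrections come from
# feeding the series x = 1 + a1*eps + a2*eps**2 + ... back into the equation.
# The coefficients turn out to be the Catalan numbers: 1, 1, 2, 5, 14, ...

def perturbative_root(eps, order):
    """Partial sum of the perturbation series for x = 1 + eps*x^2."""
    # Catalan numbers via the recurrence C(n+1) = C(n) * 2(2n+1) / (n+2)
    c, total = 1, 0.0
    for n in range(order + 1):
        total += c * eps ** n
        c = c * 2 * (2 * n + 1) // (n + 2)
    return total

def exact_root(eps):
    """Exact (smaller) root of eps*x^2 - x + 1 = 0, for comparison."""
    return (1 - (1 - 4 * eps) ** 0.5) / (2 * eps)

eps = 0.1
for order in (0, 1, 2, 3):
    print(order, perturbative_root(eps, order))
print("exact:", exact_root(eps))   # each extra term gets closer to this
```

In field theory the bookkeeping for "each extra term" explodes combinatorially, which is exactly what the diagrams are for.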

A Feynman diagram directly corresponds to some complicated mathematical expression in this scheme, when translated by some simple rules. Since this kind of diagram looks like a process in spacetime (particles going from here to there, interacting, etc.), many physicists call the internal lines of a Feynman diagram, which represent an interaction, virtual particles. Some other physicists (e.g., Steven Weinberg) think this interpretation is inappropriate. Regardless, the physical content is the same either way; it is just an intuitive picture.

...

Being aware of the context of the concept of 'virtual particle' allows one to appreciate just how nutty some crackpots can get on the topic.

Virtual particles are internal lines of a Feynman diagram, so they only ever represent interactions with something else. Even if they lead to other internal lines, they eventually terminate on something real, so that case simply represents a more complicated interaction with something that is not virtual. Therefore, one must either push off something else or produce real particles as exhaust, because virtual particles only ever mediate such interactions.

Moreover, it is impossible to 'rules lawyer' or 'bend' conservation laws using virtual particles: since they are an interpretation in the context of perturbation theory, if they break conservation, so does the theory. It can't be otherwise, because virtual particles are just a part of a specific way to talk about the theory. (A more technical reason is that every Lorentz-invariant theory will have Feynman diagrams that exactly conserve energy and momentum at every vertex, individually.)

One could take things like EmDrive more seriously if they were up-front about breaking energy and momentum conservation. It wouldn't be some grave sin. For example, it was a fad in the '70s to make up wonky theories of gravity that need not conserve energy and momentum; people have experimentally tested broad classes of those and still run such analyses on some observations. Physicists also make up theories with Lorentz violations, even with it being the most cherished principle in fundamental physics, and look for experimental evidence for such violations. Despite twaddle about "science orthodoxy" and "accepting new ideas," the reality is that some ideas aren't even wrong.

...

Ok, this rant has gone long enough, but I might as well mention a more theoretical criticism of Sonny's motivation of {ρvac c²}·{4π(cT₀)²} = {c⁴/G}.
Sonny's theoretical motivation is that the vacuum energy density integrated over the horizon area is the Planck force. The formula is a vague numerological coincidence that's rather far from the precision Sonny ascribes to it, as covered before, but it also rests on a conceptual mish-mash. If T₀ is the age of the universe (approximately the Hubble time tH), then cT₀ is the light-travel-time distance. However, for a flat FRW universe, the Euclidean formula for the surface area of a sphere, 4πR², would only be correct in terms of proper distance, which is not even vaguely approximated by LTT distance (neither at emission nor at detection).
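To see how far apart these distances actually are, here's a back-of-the-envelope integration (Python; the flat ΛCDM parameters are illustrative WMAP9-ish values, an assumption of mine rather than anything in Sonny's notes):

```python
import math

# Flat LambdaCDM, WMAP9-ish parameters (illustrative values).
H0 = 69.7                  # km/s/Mpc
Om, OL = 0.28, 0.72        # matter and vacuum fractions
c = 299792.458             # km/s
D_H = c / H0               # Hubble distance, Mpc

# Proper (comoving) distance to the particle horizon:
# D = D_H * Int_0^1 da / (a^2 E(a)) with E(a) = sqrt(Om/a^3 + OL).
# Substituting a = u^2 removes the integrable 1/sqrt(a) singularity at a = 0
# (radiation is neglected, which only matters at the few-percent level).
def integrand(u):
    return 2.0 / math.sqrt(Om + OL * u**6)

n = 10000
horizon = D_H * sum(integrand((i + 0.5) / n) for i in range(n)) / n  # midpoint rule

# Light-travel-time distance c*T0 for T0 = 13.7 Gyr:
ltt = 13.7e9 * 3.156e7 * c / 3.0857e19   # Mpc (1 Mpc = 3.0857e19 km)

print(horizon / 1000, "Gpc (proper distance to horizon)")
print(ltt / 1000, "Gpc (c * T0)")        # smaller by a factor of ~3
```

So using 4π(cT₀)² as "the area of the horizon sphere" understates the proper-distance area by roughly an order of magnitude.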
 


...I'll take your word for it.
 
Well... Can you tell me what Levy-Bruhl thought about Nuer religion or Alasdair MacIntyre's conception of rationality? I can.

...

Trade? :(
 
Well... Can you tell me what Levy-Bruhl thought about Nuer religion or Alasdair MacIntyre's conception of rationality? I can.

...

Trade? :(
Sometimes I wish I'd done a degree in a more obscure field, because saying "I can give you a detailed analysis of the development of armoured warfare in a geopolitical context" doesn't actually sound very impressive on the web :p
 
Sometimes I wish I'd done a degree in a more obscure field, because saying "I can give you a detailed analysis of the development of armoured warfare in a geopolitical context" doesn't actually sound very impressive on the web :p
Impressive? Perhaps not. Interesting? Most assuredly.
 