Archive for January, 2016

The Tenth Planet

Posted in Cosmos, Prophesy, Warnings on January 25, 2016 by betweentwopines

“Mabus then shortly will die, there will come a horrible devastation of people and animals. Then suddenly the vengeance will be seen when the comet runs. For a hundred hands (500 days, as each hand has 5 fingers) there is going to be widespread thirst and hunger.” (Nostradamus)

Supernova 1054 – Creation of the Crab Nebula

[SN 1054 petroglyph] On July 4, 1054 A.D., Chinese astronomers noted a “guest star” in the constellation Taurus; Simon Mitton lists 5 independent preserved Far-Eastern records of this event (one of 75 authentic guest stars – novae and supernovae, excluding comets – systematically recorded by Chinese astronomers between 532 B.C. and 1064 A.D.). At its brightest this star was about 4 times brighter than Venus, or about magnitude -6, and it was visible in daylight for 23 days.
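The quoted peak brightness can be checked with Pogson's magnitude relation; here is a minimal sketch in Python, assuming Venus peaks near magnitude -4.5 (the magnitude formula is standard, but the exact Venus value is an assumption):

```python
import math

def magnitude_from_ratio(reference_mag, flux_ratio):
    """Pogson's relation: an object flux_ratio times brighter than a
    reference is brighter by 2.5 * log10(flux_ratio) magnitudes."""
    return reference_mag - 2.5 * math.log10(flux_ratio)

# Assumed value: Venus at its brightest is roughly magnitude -4.5.
sn1054_mag = magnitude_from_ratio(-4.5, 4.0)  # "about 4 times brighter than Venus"
print(round(sn1054_mag, 1))                   # -6.0, consistent with "about mag -6"
```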

Some older sources speculated that this supernova might have been as bright as the full Moon (or mag -12). The reason for this assumption was probably the intention to fit its 23-day daylight visibility to older model light curves.

It was probably also recorded by Anasazi artists (in present-day Arizona and New Mexico), as findings in Navaho Canyon and White Mesa (both AZ, found in 1953-54 by William C. Miller) as well as in Chaco Canyon National Park (NM) indicate; a review of the research on the Chaco Canyon Anasazi art is available online, including the full-size version of the photo, which was obtained by Ron Lussier. A similar photo of this possible supernova pictograph was obtained by Paul Charbonneau of the High Altitude Observatory.

As Simon Mitton points out in his book (Mitton 1978), evidence for the plausibility of this interpretation arises from the fact that on the morning of July 5, 1054 the crescent moon came remarkably close to the supernova, as seen (only) from Western North America.

In 1990, Ralph Robert Robbins of the University of Texas announced the discovery of additional records in the pottery of the Mimbres Indians of New Mexico. The plate probably representing the supernova is shown, e.g., on page 68 of Robert Garfinkle’s book Star Hopping. As the author points out, the art style of this plate was used only before 1100 A.D., and carbon-14 dating indicates that it was created between 1050 and 1070 A.D., so it very probably depicts the supernova as a 23-rayed star.

Strangely enough, it seems that almost no records of European or Arab observations of the supernova have survived to modern times.

According to Burnham, the Chinese records were translated by J.J. Duyvendak (1942; also quoted by Mitton): “.. In the 1st year of the period Chih-ho, the 5th moon, the day chi-ch’ou, a guest star appeared approximately several inches south-east of Tien-Kuan [Zeta Tauri]. After more than a year, it gradually became invisible ..” It is this date which corresponds to July 4, 1054 A.D. Burnham speculates that the term “inches” may indicate that the position was taken on a celestial globe or armillary sphere, and not in the actual sky, which may explain the “wrong” direction: in the sky, M1 is situated north-west of Zeta Tauri. Mitton points out that an “inch” refers to an angular distance of about 0.1 degrees, and “several” is typically used for a number between 3 and 5, so the separation mentioned probably corresponds to 0.3 to 0.5 degrees.

A later reference [in Sung hui-yao by Chang Te-hsiang], according to Burnham and supplemented by Mitton’s quote, states: “.. On the day Hsin-Wei [April 17, 1056] of the third month in the first year of the Chia-yu reign period [March 19 – April 17, 1056] the Director of the Astronomical Bureau said, ‘The Guest Star has become invisible, which is an omen of the departure of the guest.’ Originally, during the fifth month of the first year of the Chih-ho reign period, it appeared in the morning in the east guarding T’ien-Kuan. It was visible in the day like Venus, with pointed rays in all four directions. The color was reddish-white… It was seen altogether for 23 days [as a daylight object].”

These two dates, July 4, 1054, and April 17, 1056, indicate that the “guest star” was visible to the naked eye for 653 days, at least from China. Yang Wei-Te, Chinese court astronomer/astrologer in those days, reports that in its first two months, the star was of yellow color.
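The 653-day figure follows directly from the two recorded dates; here is a quick check in Python (the records use Julian-calendar dates, but the elapsed-day count is unaffected here, since 1056 is a leap year in both the Julian and the proleptic Gregorian calendars):

```python
from datetime import date

first_seen = date(1054, 7, 4)   # "a guest star appeared"
last_seen = date(1056, 4, 17)   # "The Guest Star has become invisible"

# Elapsed days of naked-eye visibility reported in the Chinese records
print((last_seen - first_seen).days)  # 653
```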

The date of the occurrence of supernova 1054, July 4, has repeatedly been questioned, as the Chinese observers may have missed the very first days, e.g. due to weather. A Japanese source (the poet Sadaie Fujiwara, about 1235 A.D.) shifts this date to as early as May 29, but on that day the star Zeta Tauri, which is mentioned in the report, is still too close to the Sun to be observable.

In 1997, Giovanni Lupoato, in his book SN 1054: Una Supernova sul Medioevo (see e.g. Steven N. Shore’s article in Mercury, Vol. 29, No. 2, March/April 2000, p. 9), mentioned perhaps the only preserved possible European record of this event, a somewhat dubious manuscript called the Rampona Chronicle, preserved only in a 15th-century transcript. He speculates that the date given, 24 June 1058, may contain a transcription error of “MLIIII” (1054) as “MLVIII” (1058), which would indicate a date of 24 June 1054 for this event.

For a recent review of the research on the exact date of SN 1054, see e.g. Collins et al. (1999). Still, the present author thinks that the best evidence for the date of the supernova event is still that given in the Chinese records: July 4, 1054.

The Supernova 1054 was later also assigned the variable star designation CM Tauri, a designation which is sometimes also used for the (optical) Crab pulsar. It is one of few historically observed supernovae in our Milky Way Galaxy.

The remnant of this supernova is the famous Crab Nebula M1.

Source:

Planet X And The Coming Earth Changes


by George Gross and James Grover


This article reviews the extremely disturbing evidence that catastrophic Earth changes may be far more real, and above all, far more imminent, than has been thought.

We summarize the work of the Nobel physicist Alfven in establishing the electromagnetic component of any competent future paradigm of the solar system, and indeed, of cosmology in its broadest sense. This is based upon the crucial importance of plasma physics to astrophysics and cosmology – a fact which, for reasons that are discussed, has simply not been taken into account in the establishment orthodoxy of those sciences.


Furthermore, this article summarizes James McCanney’s theoretical and practical research which leads to the conclusion that NASA is attempting to distract attention from the very real dangers of imminent Earth changes. The purpose of the article is to allow non-specialists, that is to say, the public at large, to make up their own mind on these dramatic matters.

There is a sufficient quantity of evidence from a number of scientific and historical disciplines – particularly from the application of plasma physics to the study of the solar system, but also from the geological record of planetary catastrophes and magnetic polar shifts – to make it virtually inconceivable that there will not be major Earth changes within a matter of months or, at most, within a year or two.

In this article, the authors present the evidence for this assertion in such a way that people can make up their own minds as to whether they find it convincing or not. One immediate question that is likely to arise upon being confronted with this entire matter is this: given the very dramatic nature of the predicted Earth changes and extremely serious consequences for everyone and for humanity as a whole, how has it come about that so few people are aware of the dangers? The question is addressed in this article, and, in a nutshell, there are two major answers.

First, the evidence comes from a variety of different sources, none of which are very much at the forefront of people’s minds, so that without a major fanfare and wake-up call, the events could easily be upon us like the proverbial thief in the night. Second, the state of solar and cosmic physics is so fragmented and confused that it has been relatively easy for NASA and the official organs of the scientific-military establishment to create a smokescreen of disinformation.


Their position is based on the notion that the truth would only create panic in the already roiled market, and that the statutes of the major scientific and technological institutions specifically forbid anything that would “alarm the public”.


To James McCanney and the Millennium Group for Truth in Science goes the credit for taking upon themselves the responsibility to inform us all of the purely scientific basis – in terms of an understanding of the physics of the solar system, supported by empirical data flowing directly from spacecraft experiments – of the urgency and seriousness of the probability of earthshaking changes.


Taken in conjunction with an examination of the geological record of previous planetary catastrophes, the evidence is overwhelming.



James M. McCanney’s book, Planet X, Comets and Earth Changes, is subtitled: a scientific treatise on the effects of a new large planet or comet arriving in our solar system, and expected earth weather and earth changes. [1]

In his book McCanney makes a strong case for the following points:

(1) the new electrodynamic paradigm in astronomy and astrophysics has already been established;

(2) it is being vehemently opposed and denied by the scientific and academic establishment, including and especially NASA;

(3) there is overwhelming evidence for the reality and existence of a new large planet or comet arriving in our solar system;

(4) NASA and a majority of the astronomical and astrophysical establishment have a vested interest in completely denying all of these points;

(5) he has written his book in order to place before the public the truth as he sees it;

(6) the reason for the urgency is that the new “intruder” into the solar system is going to cause massive earth changes. The question, he states, is not whether the changes will occur, but when.

The authors of this review have written it in the spirit of point (5) above.


That is to say, for them there is only one reason for writing the review. It is to pass on, as best they can, the evidence that McCanney presents, in order that readers may make up their own mind, and draw their own conclusions.


The old paradigm for understanding the origin and functioning of the solar system is overwhelmingly dominated by gravitational mechanics. It dates back to Newton, when little was known about electricity. Similarly, the Kant-Laplace hypothesis concerning the nebular origin of the solar system knows only gravitation; electromagnetism plays no part in either the origin or the functioning of the system.


The same is true of the next major step in the history of scientific cosmology – Einstein’s 1917 “Cosmological Considerations Concerning the General Theory of Relativity”.

In the 1940’s and 1950’s, Velikovsky championed the importance of electromagnetic phenomena in the solar system, both in his books, and in his correspondence and discussions with Einstein, in which Velikovsky insisted that, “celestial mechanics …without taking into account the electromagnetic fields…is in conflict with the facts.”


At about the same time, this theme was taken up by the mathematical physicist Hannes Alfven, in great experimental and theoretical detail as described below.


The nature and importance of “the fourth state of matter”, and specifically of matter in an electrically charged state, has a history that goes back to Crookes in the 19th century. Crookes realized that gases through which an electrical current has been passed themselves acquire a charge.


In 1928, Langmuir coined the term ‘plasma’ to describe that mixture of gas, charged ions and electrons. The advent of plasma physics has created a revolution in science which has not yet been fully recognized. Indeed, partly by accident, and partly with cold deliberation, the reality of the plasma revolution in cosmology has been utterly played down.

The groundwork for the new electromagnetic dimension of cosmology crystallized around a single outstanding figure: Professor Hannes Alfven and his team at the Royal Institute of Technology in Stockholm.

1942 marked the beginning of Alfven’s application of hydrodynamic theory to plasma physics. He created the magnetohydrodynamic equations describing the motion of plasma as a fluid in electromagnetic fields. He drew attention to the fact that,

“Waves of electrons and ions are found not only in laboratory plasma but also in the atmospheric and solar plasmas.” [2]

Such waves are now known as Alfven waves, and for this work, Alfven shared the 1970 Nobel Prize in Physics.
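The characteristic speed of these waves has a simple closed form, v_A = B / sqrt(mu0 * rho), for a plasma of mass density rho in a magnetic field B. A sketch in Python, with assumed typical solar-wind values near Earth (about 5 nT and 5 protons per cubic centimetre; these particular numbers are illustrative assumptions):

```python
import math

MU0 = 4 * math.pi * 1e-7       # vacuum permeability (T*m/A)
M_PROTON = 1.6726e-27          # proton mass (kg)

def alfven_speed(b_tesla, protons_per_m3):
    """Alfven wave speed v_A = B / sqrt(mu0 * rho) for a proton plasma."""
    rho = protons_per_m3 * M_PROTON
    return b_tesla / math.sqrt(MU0 * rho)

# Assumed typical solar-wind conditions near Earth: B ~ 5 nT, n ~ 5 cm^-3
v = alfven_speed(5e-9, 5e6)
print(f"{v / 1000:.0f} km/s")  # on the order of 50 km/s
```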

In 1946, Landau formulated the equations that describe the interaction between particles and waves in plasma, and about the same time Bohm used the term plasmons to refer to the concerted behavior of electrons in a plasma.

Let us take a brief look at the first two editions of Alfven’s Cosmical (sic) Electrodynamics, published by the Clarendon Press, Oxford, in 1950 and 1962 respectively. The second edition was co-authored by Falthammar, and the English of both editions has an interesting Scandinavian flavor. In 1954, Alfven published On the origin of the Solar System which, in 1975, was expanded as Structure and Evolutionary History of the Solar System, co-authored with Gustaf Arrhenius.

In the Preface (written in 1948) to the first edition of Cosmical Electrodynamics, Alfven spoke softly, but already sounded a big drum:

“Recent discoveries have revealed that electromagnetic phenomena are of greater importance in cosmic physics than used to be supposed. The time now seems to be ripe for an attempt to systematically trace the electromagnetic phenomena in the cosmos…”

In the General Survey, Alfven continued :

“It seems very probable that electromagnetic phenomena will prove to be of great importance in cosmic physics. Electromagnetic phenomena are described by classical electrodynamics which, however, for a deeper understanding, must be combined with atomic physics. This combination is especially important for the phenomena occurring at the passage of current through gaseous conductors which are treated by the complex theory of ‘discharges’ in gas. No definite reason is known why it should not be possible to extrapolate the laboratory results in this field to cosmic physics.” (op.cit. p.1)

What emerges from a comparison of the two editions is the consolidation of the densely mathematical and cosmological arguments of the first, into the virtual certainty of the new paradigm in the second.


As the authors point out in the preface to the second edition:

“The purpose of the first edition…was to draw attention to a field of research in an early state of development…to the fundamental principles of plasma physics and magneto-hydrodynamics…the magnetosphere…interplanetary space, to solar physics and to cosmic radiation. During the 15 years that have elapsed since the first edition was written, the subject has been developed by two of the largest research efforts of our time: thermonuclear research has increased our knowledge of magneto-hydrodynamics and plasmas, and… space research has been devoted to the exploration of the magneto-hydrodynamic conditions around the Earth.”

Consequently, the second edition incorporates the relevant findings from these new fields of research – plasma physics as developed in thermonuclear research, along with the early space data – into the consolidation of the crucial importance of electrodynamics in astronomy and astrophysics.

Thus Alfven states,

“In cosmic physics, electromagnetic processes have recently attracted a rapidly increasing interest, and it is now generally realized that they are of fundamental importance…In the interior of the Earth there exist electromagnetic processes by which the earth’s general magnetic field is generated. In the ionosphere electric currents change the earth’s magnetic field, especially during magnetic storms, and also produce luminous phenomena, aurorae, in certain regions around the geomagnetic poles…


In the magnetosphere, a complicated and rapidly varying system of currents [was] found by space research measurements. In certain regions (the radiation or Van Allen belts) there is also a flux of high-energy charged particles trapped in the magnetic field.”

(Cosmical Electrodynamics, 2nd edition, p.1)

Furthermore, “The conditions in the ionosphere and the magnetosphere of the earth are influenced by the electromagnetic state in interplanetary space, which in turn is affected by the sun. There are a number of solar electromagnetic phenomena …sunspots, prominences, solar flares, etc. In other stars electromagnetic phenomena are of importance, most conspicuously in the magnetic variable stars.” (ibid., p.1.)

Alfven goes on to point out that it was not “until classical electrodynamics had been combined with hydrodynamics to form magneto-hydrodynamics, which further must be combined with plasma physics in order to allow a deeper understanding of electromagnetic phenomena in cosmic physics.” (ibid., p.2.)

The term ‘plasma’ refers to an ionized gas, an ensemble of neutral molecules, electrons, positive and often also negative ions, together with the energy released from the excited atoms.


Alfven stresses the crucial importance of plasmas for cosmology.

“The properties of plasmas are of paramount interest in cosmic physics because most of the matter in the universe is in the plasma state. In the interior of stars, the gas is almost completely ionized. In the photosphere of the sun (and other stars) the degree of ionization is not very high, but above the photosphere, in the chromosphere and the corona, the ionization is …again almost 100%.


Vast regions of interstellar space, particularly around the hot stars of early spectral type, are highly ionized…In the sun and interplanetary space, probably also in interstellar and intergalactic space, the plasma is penetrated by magnetic fields…As a consequence, the astrophysicist’s interest in plasma physics is mainly concentrated on magnetic plasmas.”

(Alfven, op.cit. p. 134)

In their volume Structure and Evolution of the Solar System , (published in 1975 by Reidel), Alfven and Arrhenius continue to refine their astrophysical model. In their Introduction, they assert that

“Many of the ‘generally accepted’ theories [in this field] lack a valid foundation.” (p. xv) One such theory “which cannot stand critical examination is the Laplacian concept of the formation of the sun and the solar system by non-hydromagnetic processes”. (p. xv)

They go on to criticize the fact that whereas,

“[I]n most other fields of cosmic physics it was realized already 25 years ago that electromagnetic processes have a dominating influence on the dynamics of cosmic gas clouds (plasmas), the majority of cosmogonic papers published today are still based on the assumption that such forces can be neglected”

(p. xv)

This is only marginally less true today than when it was stated by Alfven in 1975.

Alfven and Arrhenius insist that,

“The processes involved in the formation of celestial bodies in our solar system requires us to use not only the methods of ordinary chemistry and ordinary celestial mechanics, but also those of plasma chemistry and magnetohydrodynamics …generally ignored or incorrectly applied…”

(op.cit., p. 4)

Here is how Falthammar, a colleague of Alfven, described the situation in 1988:

“It was widely believed that cosmic plasma would have negligible resistivity…From that it was [mistakenly] concluded that the electric field would be a secondary parameter, of little importance…Therefore, electric fields, and especially magnetic-field-aligned electric fields, which we now know to be of crucial importance, were long disregarded. Even today, only a few space missions in the outer magnetosphere have included measurements of electric fields.” [3]

“It is a sobering fact”, adds Falthammar, “that even after hundreds of satellites had circled the earth, the concept of our space environment was still fundamentally wrong in aspects as basic as the existence and role of electric fields…of the near Earth plasma itself.”

Leaving these earlier but absolutely essential contributions to our understanding of the fundamental electromagnetic component of the solar system – which complement, and certainly do not exclude, the classical gravitation/inertia view of celestial dynamics – let us see how and where McCanney fits into the picture.


McCanney took up the baton in 1979 and the early 1980s, first in the Physics and later the Mathematics department at Cornell. Given the delay in accepting the electromagnetic component that Alfven has made clear, it is easy to understand, yet utterly lamentable, that McCanney was not given tenure by either department.


The other side of that coin is that McCanney was free to take up the role of “an independent scientist, not subject to the pressures of the scientific community, peer pressure or governmental non-disclosure agreements and funding.” (ibid. p. 32)

Cornell had certain advantages: the Library was part of the Library of Congress network, so if a book was in print, it was available. Even more importantly, it was a repository of data from NASA. As we read in the introduction to his book,

“Armed with his existing theoretical work, and this incredible source of information, and with the timing that coincided with the daily arrival of new data from the Voyager and other spacecraft, he [McCanney] was in a totally unique position to do what he has done.” (loc.cit. p.iii)

In other words, schematically speaking, McCanney took over where Alfven left off. Here is the core of McCanney’s position with respect to the electromagnetic part of the paradigm.


(1)       “Our solar system acts like a large electrical circuit… Our sun forms an electric capacitor (a separation of electrical charge as done by a simple DC battery in a flashlight)

(2)       This solar capacitor has its negative pole at the surface of the sun, and also has a negative pole far out beyond the outer planets in the form of a sparse nebular cloud of dust and gases.

(3)        “An excess current of protons… continually generates and supports the solar capacitor by way of the ‘solar wind’ (literally a wind of such particles leaving the sun and blowing outwards into space)

Solar Wind

(4)        All stars and galactic nuclei, and even unlit small stars such as our ‘planets’ Jupiter and Saturn are producing …cosmic batteries around themselves.

(5)         This is a natural by-product of the nuclear fusion process (the burning of nuclear fuel such as hydrogen, helium, etc.) in the atmospheres of these celestial objects.” (McCanney, op.cit. p.10)

(6)        “The sun is powered at its atmospheric surface by an electrical fire of hydrogen and helium that we call ‘fusion’ that is constantly ignited by energetic lightning bolts in its turbulent atmosphere. It is the local electric field at the outer surface of the sun (the solar corona of high energy electrons) that hurls the vast solar flares out into the far reaches of the solar system. The positively charged protons are accelerated outwards, while the negatively charged electrons are retarded, thus causing what I have called the excess current of protons in the solar wind. The sun produces far more energy in the form of electrical energy than it does in the form of light energy.” (ibid. p.13)

(7) To give an idea of the stupendous magnitude of solar flares, they “release the force of 10 million volcanic eruptions in a matter of minutes.” Furthermore, a single coronal mass ejection (CME) can carry “more than 10 billion tons of hot, electrically charged gas” [i.e. plasma] from the sun’s corona into space, “a mass equivalent to that of 100,000 battleships”, packing a punch “comparable to that of 100,000 hurricanes” and traveling at “between 1-5 million miles an hour.”

(8)        “The power of CME lies in its ability to drive currents in the Earth’s magnetosphere” and “if the magnetic field carried by The CME has a southward orientation (opposite Earth’s northward-flowing magnetic field lines) the magnetosphere gets a major jolt…transferring…millions of amperes of electric current to the magnetosphere.” (Carlowicz and Lopez, op.cit. p.89) This can knock out power lines and electric generators, and disrupt all forms of electronic communication.

(9)         That is as far as the establishment position goes – and Carlowicz & Lopez represent the establishment viewpoint which McCanney denounces, because it deliberately fails to warn against the far greater dangers which the solar storms hold – namely their capacity, when triggered by comets or planets intruding into the solar system, to produce major earth changes such as polar shifts, flash freezing of continents, which certainly occurred in the past. That Carlowicz & Lopez do in fact mislead their readers is evident from their statement, on p.91, that “storms from the sun cannot harm life on the surface of the Earth.”

(10)      “In the summer of 2001, at the recommendation of a panel of space and solar physicists, NASA announced the cancellation of the International Solar-Terrestrial Physics (ISTP) program. The Agency decided that ‘official co-ordination of the international missions was a scientific luxury it could no longer afford.’ NASA withdrew its support for the (solar) Wind mission, and for participation in Japan’s Geotail mission.


Funding for some of the key elements of the ISTP success story – the theory and modeling programs, the data centers, the ground-based observatories – was almost entirely cut off.” [6]


Why? Because the co-ordination of the data from all of those sources would have let the cat out of the bag and made it a lot more difficult to sustain the two illusions that,

(1)   electromagnetism plays a negligible part in solar physics and

(2)   there is no real, imminent threat of major catastrophic Earth changes.
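The figures quoted in point (7) above can be sanity-checked with elementary mechanics; here is a rough Python estimate of the kinetic energy implied by the quoted mass and the low end of the quoted speed range (the unit conversions are standard; which speed applies to a given CME is not stated, so this is only an order-of-magnitude check):

```python
MPH_TO_MS = 0.44704                 # miles per hour -> metres per second

mass_kg = 10e9 * 1000               # "10 billion tons" (read as metric tons)
speed_ms = 1e6 * MPH_TO_MS          # low end of "1-5 million miles an hour"

kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2
print(f"{kinetic_energy_j:.1e} J")  # roughly 1e24 J at the low-end speed
```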


When a celestial body such as an asteroid, a comet or a planet enters the solar capacitor,

“it will cause a localized and then more extended electrical discharge of the capacitor. This is much like the backyard bug killer or ‘zapper’ that discharges when a bug flies between the high-voltage screens.”


What happens next is that,

“the increased electrical activity imparted to the solar atmosphere ignites a higher level of nuclear fusion causing the sun to become excited above its normal levels and may ignite small to very large solar flares. Small comets have many times been observed to directly cause solar flares as they pass near the sun.”

(ibid., p.16)


Planet X being pulled into our solar system


“Small comets generally discharge only the small localized regions of the solar capacitor, whereas the very large ones can discharge up to and including the entire capacitor… These are the ones (such as the anticipated Planet-X) which are growing much larger and pose a serious threat to the existing planets including Earth.”


Now here is the crux of the matter:

“The sun is currently at levels never experienced before and it is increasing to record levels of activity every day. We were supposed to have peaked in the year 2000 with the ‘solar maximum’… We are currently over two years beyond that, and there is no end to the increases in sight. It definitely indicates that the sun is currently interacting with a large intrusion into the solar capacitor.”


The intruder is capable of “action at a distance” in the following manner.

“The comet would discharge to the solar surface causing a significant solar flare that could blow a huge wall of high energy protons our way, causing an alteration of [Earth’s] magnetic field structure [which under ordinary circumstances, acts as a protective shield] and cause electrical and cyclonic storms at the surface of the earth.” (pp.21-22)

This in turn could trigger massive Earth changes of the kind that are known to have already occurred in the geological record.




“Earth has been subjected to a close encounter by at least one massive new comet becoming a planet in the time frame of no more than 10,000 years ago. The time frame that clearly makes sense for the last event is approximately 3,500 years ago, the time at which we see a dramatic transition between ‘pre-history’ and the digging of ‘modern man’ out of the devastation. There is no longer any other possible scenario, and modern science is impeding progress each day that it holds the reins of power and prevents this reality from blossoming forth.” (op. cit., p.100)

It was Velikovsky who, fifty years ago, pioneered the above hypothesis, and McCanney pays a special tribute to Immanuel Velikovsky in which he insists on the value of Velikovsky’s insight:

“NASA scientists have been repeating…for 25 years that ‘Velikovsky has been proven wrong.’ It is time to set the record straight. It was NASA that was [and is still] wrong. Velikovsky single-handedly did more for the advancement of true science than all of the NASA scientists in the last 3 decades combined.”

(p. 102)

The proceedings of the Second IEEE International Workshop (published in Astrophysics and Space Science Vol. 227, 1995, and reprinted in book form the same year by Kluwer Academic Publishers as Plasma Astrophysics and Cosmology ed. Peratt, A.L.) fully support the immense importance of electromagnetism in the practical and theoretical study of cosmology.


The most obvious reason for this importance lies in the fact that 99% of the matter in the universe is in the form of plasma – which is composed, precisely, of electrically charged gas: ions and electrons.

McCanney goes on to assert that, on the basis of the electromagnetic theory of solar system and planetary formation, the solar system was not born at a single point in time. It began with the Sun and Jupiter. All the other planets were integrated into the solar system by capture.


They began as comets in elliptical orbit, but as they accreted more and more debris by virtue of their electrical charge and attraction, they slowed down and their orbits settled into circular motion around the sun.

“We clearly know today that the pre-planet comet [that became Venus] was CAPTURED by Jupiter, a process that is very common, and well understood mathematically, and has been observed as every one of the major planets…have associated families of comets that were captured. …Venus is a hot new planet.”


A similar process of capture is happening with Planet X. The perturbations of the solar corona bear witness to its entry into the solar system. The question is not whether it is there or not, the question is when will its presence cause major earth changes.

Planet-X on 01/28/02 and 03/03/03

“The Sun is currently at levels never experienced before, and it is increasing to record levels of activity every day… It definitely indicates that the sun is currently interacting with a large intrusion into the solar system.”


However, McCanney states that the timing of the earth changes cannot be accurately predicted because “even if a large new object were known about today with exact location and orbital information, its orbit will change on a daily basis and the true orbit and location will elude prediction… although rough estimates can be made if an actual candidate is identified.”

(McCanney, op.cit., p. xii)




It is a major contention of McCanney’s that NASA and the academic astronomical and astrophysical establishment have been, and still are, engaged not only in peddling the antiquated pre-electrical paradigm of celestial functioning, but also in a massive cover-up. He says, therefore: “It is imperative that the public turn to the truth of what is really going on, and force NASA to release any data regarding new planets or other large objects.”

However, McCanney fears that the opposite is going to happen:

“NASA will work harder and harder to put their name, and their incorrect and borrowed information (mixed with their outdated and incorrect theories) in the public eye through news releases, TV and weather specials and newspaper articles.” (McCanney, op.cit. p. xii)

“It is essential that the public recognize them for who they are and what they are. It is also important that the public understand the correct information so they’re not led down the wrong path by what appear to be well-educated scientists who stand behind their Ph.D.’s and government funding.”

“It is safe to say that the NASA scientists are in complete denial. They hide behind a news release system in which no one can ask them questions, they hide behind their own referee system in which they referee their own articles, or, as in the case of automatic publication in SCIENCE or NATURE journals, articles are not refereed at all.”


What really is at stake, according to McCanney, is that “for every day that NASA sits back, says nothing and collects an ill-gotten paycheck, it is…another day that the human civilization on this planet goes unprepared for a critical time of survival as a species.”


McCanney brings before the bar of public opinion the following statements:

“NASA was caught lying and producing doctored photos to prove that the comet Hale-Bopp did not have a companion…NASA began blatantly hiding data from many space investigating facilities including the Hubble Space Telescope, the SOHO solar observatory…NASA developed public relations offices whose …function was to…fend off problematic people like myself with planned disinformation campaigns.”

(McCanney, p.44)

“The NASA news release system is strict and comes only from designated NASA news points in Goddard Space Center, and Jet Propulsion Labs. Individual scientists are under strict non-disclosure agreements…These scientists …cannot discuss or admit publicly any event that might cause ‘public alarm’. That is why the data regarding a new arrival like Planet X will not be allowed from the halls of NASA. They are under strict contract NOT to tell the public.”

(McCanney, op.cit. p.49)

“[It] is clear…that NASA has observed such objects [as Planet X] and is hiding the data from the public.” (p.32) Similarly: “There is clear evidence that NASA is now hiding data that would prove that there is another massive object inbound into the solar system with potential for devastating effects on planet Earth.”

(p. 101)


The fallacy of materialism is its inability to comprehend the primacy of consciousness and of light.


The critical importance of electricity and magnetism in cosmology can be approached from another angle. Electromagnetic force is the cosmic medium that actually links human, planetary, solar, and galactic systems. The universe is not a mechanical contraption of inert matter; it is a vital organic, conscious whole, and electromagnetism is the medium by which information is exchanged between the various members of that totality.


The bio-electric “subtle bodies” of humanity, the core of planet earth, its electromagnetic fields, the heliocosm (or solar field), and the galactic center (the Great Central Sun or the “Sun behind the Sun”) are linked by a two-way, top-to-bottom and bottom-to-top cosmic walkie-talkie, electromagnetic information system. This is what gives the cosmos its unity.


Animated by conscious beings, there is a cosmic code that harmonizes and synchronizes the different vibratory frequencies that characterize each of the planetary, solar, galactic and nebular sub-systems. That code is electromagnetic; its essence is light. Light, not matter, is primary.


Consciousness is not material, though it may be embodied. Light and consciousness are similar, and primary.


Matter is secondary; consciousness is electromagnetic and is primary. This integral, holistic understanding of the universe is a far cry from the crude, one-dimensional, dangerous, dead and deadly paradigm of scientific materialism to which the military-industrial-scientific complex still clings, and of which NASA and its space programs are an essential part.

In the beginning was Light. This is not the antiquated claim of an anthropomorphic religion; it is the basic realization of the new spiritual, scientific, holographic and electromagnetic paradigm of the universe.

Of this paradigm, Arguelles [7] points out that it “brings into focus a world of coherence and unity, a resonant matrix… of information transmission.” Underlying this cosmic loom is “the principle of harmonic resonance” (op.cit., p.54).


Resonance has to do with the vibratory frequencies of any system that produces waves – sound waves or electromagnetic waves, any kind of waves. Harmonic resonance means the exchange of vibration, harmonizing or synchronizing and sometimes dissonant, between two or more vibrating systems, whether these are tuning forks, violin strings, pendulums, or planets.

It was the Maya, in Mesoamerica, who pioneered this understanding of the order underlying the cosmos, in which mind or consciousness is primary and in which “there is nothing without feeling” and “the field of reality is saturated with purpose.” (ibid. p.56)


The purpose of the continuous exchange of information between the galactic center and the other members of the galaxy “is the superior coordination of the member organisms, the star systems.”


This entails the capacity of ‘local intelligence’, i.e. the planetary mind or field of consciousness, to “perceive the whole and align itself accordingly” and to extend this process of alignment to other member systems.

“The galactic game is superior intelligent harmonization” through which “the local intelligence is taught or shown how it works, in such a manner that it comes to its own conclusions” without coercion.

(ibid., pp.56-7)

What do we know of the solar or planetary cycles?

“The solar system is a self-contained organism whose subtle sheath or morphic field is called the heliocosm. Every 11.3 years the heliocosm pulses outward and then for another 11.3 years it pulses inward. These 11.3 inhalation-exhalation cycles are referred to as the heliopause, whose total movement thus occurs over a period of some 23 years…”

(ibid., p.118)

In parallel with this heliopause cycle there is a “binary sunspot” movement in which,

“two ‘spots’ –one negative, the other positive, pulse inward from positions 30 degrees north and south of the solar equator. Approximately every 11.3 years the two spots meet at the equator, reverse polarity, and begin the process again at 30 degrees north and south of the solar equator…The sunspots’ activity causes great disturbance of the Earth’s radio waves and the bio-electromagnetic field in general.”


“The Sun is the central coordinating intelligence in the solar field. The planets represent harmonic gyroscopes whose purpose it is to maintain the resonant frequency represented by the orbit which the planet holds.”


“Though the heliocosm, the total solar body, is a self-regulatory system, it is at the same time a subsystem within the larger galactic field. Thus its inhalation consists of [receiving] cosmic forces – galactic frequencies – monitored either directly from the galactic core and/or via other intelligent star systems. Its exhalation [or transmission] represents transmuted streams of energy/information returned back to the galactic core [known to the Maya as Hunab Ku, and to contemporary investigation as ‘the Great Central Sun’ or ‘the Sun behind the Sun’].”

(ibid., p.119)

How does the planet Earth receive these electromagnetic impulses?

“The resonance of the earth functions like the oscillations of a giant electromagnetic battery. The key features of this battery are the two shells of the ionosphere, the lower lunar and the upper solar shell, respectively 60 and 70 miles above the terrestrial floor of the electromagnetic ocean. It is the currents of the ionosphere in direct resonance with the solar and lunar fields that moderate the wind and atmospheric conditions of the lowest layers of the electromagnetic ocean.


Oscillating at approximately 7.8 cycles per second, the ionosphere is in resonance with the human brain, which – when oscillating at 7.8 cycles per second – reflects a condition of ‘samadhi ’, or [profound] meditational absorption. This common neural-ionospheric frequency is a prime key [to the full development of the Light Body of the Earth and of individual human beings].”  [op.cit. p.186]

“Far beyond the ionosphere lie the next two components of the Earth’s electromagnetic battery, the radiation belts – the lower, positively charged, the upper, negatively charged electron solar belt. It is these belts, like cellular membranes, that mediate the larger electromagnetic currents connecting the Earth to the Sun and to the other systems of the galactic hub…” [ibid. p.186]

The potential for harmonizing the galactic, solar, and planetary vibratory rates with the human is evident in the common neural and ionospheric frequency of 7.8 cycles per second. This is not only the frequency of enlightenment; it also augurs the possibility of a further transformation: the generating of the Light Body of individual human beings, of humanity as a whole, and of the planet.

But that is another whole chapter in the story. Here, I have been concerned to show why the electromagnetic paradigm is of such crucial importance to the evolution of humanity.


It follows that the forces who wish to continue to exploit humanity by reducing everything to materiality also wish to stunt that evolution, oppose the paradigm, and do all they can to pretend that it does not exist, that Planet X does not exist, and that the Earth changes it may cause are pure fiction.




Here we shall be following the evidence assembled and presented by Hazelwood in his account of the entire Planet-X scenario. [8]



In January 1981, several daily newspapers carried a report that Dr. Robert Harrington and his colleague, Dr. Thomas Van Flandern, of the US Naval Observatory, had told a meeting of the American Astronomical Society that “irregularities in the orbit of Pluto indicated that the solar system contained a 10th planet.”


A similar story was run by the New York Times on June 19, 1982.

One year later, in 1983, the newly launched IRAS (Infrared Astronomical Satellite) quickly found Planet-X. “A heavenly body as large as Jupiter and part of this solar system has been found in the direction of the constellation of Orion by an orbiting telescope,” wrote the Washington Post. In August 1988, a report by Harrington calculated that its mass is probably four times that of Earth.

In 1992, Harrington and Van Flandern published their findings and their conclusion that there is an ‘intruder’ planet. The search was narrowed down to the southern skies, below the ecliptic.

Planet-X is likely to be a brown dwarf whose orbit takes it back and forth between our Sun and our Sun’s twin. Its orbit takes 3,660 years to complete.

“When a planet’s orbit is between two suns, instead of round one, that orbit is a bit peculiar. It spends 99.99% of its time slowly going away from one of the suns…to reach the half-way point.


Then…the gravity of the sun it’s approaching takes over, and in a relative flash it travels the other half of its journey. Planet-X reached the halfway point sometime in 2000. It only takes about 3 years to travel the rest of the distance…It’ll be cooking with its greatest speed by the time it passes. Once it crosses Pluto’s orbit, it will only take 90 days to pass right between Earth and the Sun.”
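Setting aside the unorthodox two-sun geometry, the underlying claim that a body on a highly elliptical orbit spends nearly all its time far away and sweeps through the inner solar system quickly is standard Keplerian behavior, and can be sanity-checked. The sketch below is my own illustration, not Hazelwood's arithmetic; the function name and the assumed ~1 AU perihelion (the text has the object passing between Earth and the Sun) are assumptions:

```python
def orbit_stats(period_years, perihelion_au):
    """Keplerian sanity check for a bound solar orbit (units: AU, years)."""
    a = period_years ** (2.0 / 3.0)       # Kepler's third law: P^2 = a^3
    aphelion = 2.0 * a - perihelion_au    # r_peri + r_aph = 2a
    eccentricity = 1.0 - perihelion_au / a
    # Conservation of angular momentum: v_peri * r_peri = v_aph * r_aph,
    # so the perihelion-to-aphelion speed ratio equals r_aph / r_peri.
    speed_ratio = aphelion / perihelion_au
    return a, aphelion, eccentricity, speed_ratio

# The quoted 3660-year period, with an assumed perihelion of ~1 AU:
a, r_aph, e, ratio = orbit_stats(3660.0, 1.0)
print(f"semi-major axis ~{a:.0f} AU, aphelion ~{r_aph:.0f} AU, e ~{e:.3f}")
print(f"speed at perihelion ~{ratio:.0f}x the speed at aphelion")
```

On these assumptions the orbit is extremely eccentric (e ≈ 0.996) and the body moves hundreds of times faster at perihelion than at aphelion, so such an object really would spend the overwhelming fraction of its period in the outer dark; the specific “99.99%” and “90 days” figures in the quote are the author's own.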

The most likely date for its passage and effect on planet Earth is springtime 2003. In 1982, even NASA momentarily and officially recognized the existence of Planet-X:

“An object is really there beyond the outermost planets.”

Today, NASA is trying to sweep the danger under the carpet, claiming that it is nothing more than a large asteroid. But whereas until early 2001 only the largest telescopes could have seen Planet-X, now even small to mid-level observatories are sighting it.


On the 4th of April 2001, the Lowell Observatory reported its magnitude and co-ordinates.





“The last ice-age pole shift is… proof that we had an encounter with a large celestial body at that time”  (McCanney, op.cit. p.93)

The old ‘true’ north pole had been just north of the state of Wisconsin, and Russian Siberia had a tropical climate with rain forests and mastodons. In a single night of total destruction, those areas were blown apart by incredible winds and storms, and the pole shifted an estimated 30 to 40 degrees.


Overnight the mastodon herds were flash frozen and buried in what is now the Siberian tundra, with the fresh tropical plants still in their throats. The meat was so well preserved that it was used as food (and even sold on the commercial markets) during the building of the trans-Siberian railway. Even a single day of exposure to a warm temperature would have rendered this meat rancid and useless.


The Earth Change was so rapid and complete that this did not happen. The meat was as fresh as the day it froze, thousands of years earlier.” That freeze dates to about 3,600 years ago, which is precisely the period of the most recent visit of Planet-X before the present time.

In conclusion, the authors of the present paper will be satisfied if it helps draw wider attention to McCanney’s work in such a way that readers will want to make up their own minds regarding the significance and dramatic implications of McCanney’s book.

[1] McCanney, James M., Planet X, Comets and Earth Changes (JMcCanneyScience Press). Available through his website.
[2] Eliezer & Eliezer (2002) The Fourth State of Matter: An Introduction to Plasma Science (IOP Publishing), 2nd Edition, p.160
[3] Fälthammar, C.-G. (1988) “Astrophysical Significance of Observations and Experiments in the Earth’s Magnetosphere,” Dept. of Plasma Physics, Royal Institute of Technology, Stockholm, Sweden, p.5
[4] ibid., p.6
[5] Carlowicz and Lopez (2002) Storms from the Sun (National Academy Press) p.13
[6] ibid., p.189
[7] Arguelles, J. (1987) The Mayan Factor: Path Beyond Technology (Bear & Co., N.M.)
[8] Hazelwood, M. (2001) Blindsided: Planet-X Passes in 2003. Earthchanges. (ISBN 1-931743-40-1. First Publish.)

Source :

Related Articles/Videos :

Planet X – The Current Status (pdf) by P. K. Seidelmann and R. S. Harrington, U.S. Naval Observatory

Location of Planet X


Planet X Incoming and the Mysterious Death of Dr. Robert Harrington

Nibiru Planet X Arriving 2018 Says Ex NASA Worker

Nibiru 2015 Planet X Secret Underground Cities










Before The Maya: The Olmecs, Quetzalcoatl and the Megalithic Origins of Mesoamerica

Posted in Ancient History, Archaeology, ATS Thread, History on January 22, 2016 by betweentwopines

Colossal Olmecs

Published on Dec 23, 2009. City College of San Francisco’s Latin American Studies Department, Concert & Lecture Series, and The Consulate of Mexico present Colossal Olmecs. The speaker is Dr. Sara Ladrón, Director of the Museum of Anthropology, Xalapa, Veracruz, Mexico.



Published on Jul 5, 2015
Filmed at the Megalithomania Conference in Glastonbury on 9th May 2010. Hugh has travelled around Mexico, Guatemala, Honduras and Belize in search of the pre-Mayan megalithic civilization that flourished as far back as 7000 BC. Mexico is famous for its Mayan and Aztec architecture, but Hugh has discovered evidence of much earlier cultures that were of “megalithic” origin and were the inspiration behind the Mayan emergence. The Mayans have been credited with introducing a sophisticated calendar, agricultural practices and incredible stonework, but it is now thought that the Olmec invented the Long Count calendar that ended in 2012 and taught the Maya much of what they knew. They were experts in the use of toad ‘___’, psychedelic mushrooms and altered states, suggesting they received their knowledge from the shamanic realm and shared it with other cultures. Hugh also explores the legend of Quetzalcoatl, a bearded god (who is carved in stone at several Olmec sites), and gives an overview of the incredible Olmec civilization, who were thought to be African in origin and part of a prehistoric cosmopolitan culture that travelled the world.

Hugh Newman is an earth mysteries and esoteric science researcher. He organises the Megalithomania Conferences. His most recent book, ‘Earth Grids – The Secret Patterns of Gaia’s Sacred Sites’, has been published by Wooden Books. He has also appeared on Ancient Aliens (History) and Search for the Lost Giants (History).

Link to ATS Thread :

Astronomers say a Neptune-sized planet lurks beyond Pluto

Posted in Cosmos, Science on January 20, 2016 by betweentwopines

The solar system appears to have a new ninth planet. Today, two scientists announced evidence that a body nearly the size of Neptune—but as yet unseen—orbits the sun every 15,000 years. During the solar system’s infancy 4.5 billion years ago, they say, the giant planet was knocked out of the planet-forming region near the sun. Slowed down by gas, the planet settled into a distant elliptical orbit, where it still lurks today.

The claim is the strongest yet in the centuries-long search for a “Planet X” beyond Neptune. The quest has been plagued by far-fetched claims and even outright quackery. But the new evidence comes from a pair of respected planetary scientists, Konstantin Batygin and Mike Brown of the California Institute of Technology (Caltech) in Pasadena, who prepared for the inevitable skepticism with detailed analyses of the orbits of other distant objects and months of computer simulations. “If you say, ‘We have evidence for Planet X,’ almost any astronomer will say, ‘This again? These guys are clearly crazy.’ I would, too,” Brown says. “Why is this different? This is different because this time we’re right.”

Mike Brown (left) and Konstantin Batygin.

Outside scientists say their calculations stack up and express a mixture of caution and excitement about the result. “I could not imagine a bigger deal if—and of course that’s a boldface ‘if’—if it turns out to be right,” says Gregory Laughlin, a planetary scientist at the University of California (UC), Santa Cruz. “What’s thrilling about it is [the planet] is detectable.”



Batygin and Brown inferred its presence from the peculiar clustering of six previously known objects that orbit beyond Neptune. They say there’s only a 0.007% chance, or about one in 15,000, that the clustering could be a coincidence. Instead, they say, a planet with the mass of 10 Earths has shepherded the six objects into their strange elliptical orbits, tilted out of the plane of the solar system.

The orbit of the inferred planet is similarly tilted, as well as stretched to distances that will explode previous conceptions of the solar system. Its closest approach to the sun is seven times farther than Neptune, or 200 astronomical units (AUs). (An AU is the distance between Earth and the sun, about 150 million kilometers.) And Planet X could roam as far as 600 to 1200 AU, well beyond the Kuiper belt, the region of small icy worlds that begins at Neptune’s edge about 30 AU.
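The article's numbers can be cross-checked with Kepler's third law (for a solar orbit, the period in years squared equals the semi-major axis in AU cubed). This short sketch is a consistency check of my own, not a calculation from the paper; it shows that the quoted 15,000-year period and 200 AU perihelion do imply an aphelion inside the quoted 600 to 1200 AU range:

```python
period_years = 15_000      # quoted orbital period
perihelion_au = 200        # quoted closest approach to the sun

a = period_years ** (2 / 3)        # Kepler's third law: P^2 = a^3
aphelion = 2 * a - perihelion_au   # perihelion + aphelion = 2a

print(f"semi-major axis ~{a:.0f} AU")          # ~608 AU
print(f"implied aphelion ~{aphelion:.0f} AU")  # ~1016 AU, within 600-1200 AU
```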

If Planet X is out there, Brown and Batygin say, astronomers ought to find more objects in telltale orbits, shaped by the pull of the hidden giant. But Brown knows that no one will really believe in the discovery until Planet X itself appears within a telescope viewfinder. “Until there’s a direct detection, it’s a hypothesis—even a potentially very good hypothesis,” he says. The team has time on the one large telescope in Hawaii that is suited for the search, and they hope other astronomers will join in the hunt.


Batygin and Brown published the result today in The Astronomical Journal. Alessandro Morbidelli, a planetary dynamicist at the Nice Observatory in France, performed the peer review for the paper. In a statement, he says Batygin and Brown made a “very solid argument” and that he is “quite convinced by the existence of a distant planet.”

Championing a new ninth planet is an ironic role for Brown; he is better known as a planet slayer. His 2005 discovery of Eris, a remote icy world nearly the same size as Pluto, revealed that what was seen as the outermost planet was just one of many worlds in the Kuiper belt. Astronomers promptly reclassified Pluto as a dwarf planet—a saga Brown recounted in his book How I Killed Pluto.

Now, he has joined the centuries-old search for new planets. His method—inferring the existence of Planet X from its ghostly gravitational effects—has a respectable track record. In 1846, for example, the French mathematician Urbain Le Verrier predicted the existence of a giant planet from irregularities in the orbit of Uranus. Astronomers at the Berlin Observatory found the new planet, Neptune, where it was supposed to be, sparking a media sensation.

Remaining hiccups in Uranus’s orbit led scientists to think that there might yet be one more planet, and in 1906 Percival Lowell, a wealthy tycoon, began the search for what he called “Planet X” at his new observatory in Flagstaff, Arizona. In 1930, Pluto turned up—but it was far too small to tug meaningfully on Uranus. More than half a century later, new calculations based on measurements by the Voyager spacecraft revealed that the orbits of Uranus and Neptune were just fine on their own: No Planet X was needed.

Yet the allure of Planet X persisted. In the 1980s, for example, researchers proposed that an unseen brown dwarf star could cause periodic extinctions on Earth by triggering fusillades of comets. In the 1990s, scientists invoked a Jupiter-sized planet at the solar system’s edge to explain the origin of certain oddball comets. Just last month, researchers claimed to have detected the faint microwave glow of an outsized rocky planet some 300 AU away, using an array of telescope dishes in Chile called the Atacama Large Millimeter Array (ALMA). (Brown was one of many skeptics, noting that ALMA’s narrow field of view made the chances of finding such an object vanishingly slim.)

Brown got his first inkling of his current quarry in 2003, when he led a team that found Sedna, an object a bit smaller than both Eris and Pluto. Sedna’s odd, far-flung orbit made it the most distant known object in the solar system at the time. Its perihelion, or closest point to the sun, lay at 76 AU, beyond the Kuiper belt and far outside the influence of Neptune’s gravity. The implication was clear: Something massive, well beyond Neptune, must have pulled Sedna into its distant orbit.



That something didn’t have to be a planet. Sedna’s gravitational nudge could have come from a passing star, or from one of the many other stars in the stellar nursery that surrounded the nascent sun at the time of the solar system’s formation.

Since then, a handful of other icy objects have turned up in similar orbits. By combining Sedna with five other weirdos, Brown says he has ruled out stars as the unseen influence: Only a planet could explain such strange orbits. Of his three major discoveries—Eris, Sedna, and now, potentially, Planet X—Brown says the last is the most sensational. “Killing Pluto was fun. Finding Sedna was scientifically interesting,” he says. “But this one, this is head and shoulders above everything else.”

Brown and Batygin were nearly beaten to the punch. For years, Sedna was a lone clue to a perturbation from beyond Neptune. Then, in 2014, Scott Sheppard and Chad Trujillo (a former graduate student of Brown’s) published a paper describing the discovery of 2012 VP113, another object that never comes close to the sun. Sheppard, of the Carnegie Institution for Science in Washington, D.C., and Trujillo, of the Gemini Observatory in Hawaii, were well aware of the implications. They began to examine the orbits of the two objects along with 10 other oddballs. They noticed that, at perihelion, all came very near the plane of the solar system in which Earth orbits, called the ecliptic. In a paper, Sheppard and Trujillo pointed out the peculiar clumping and raised the possibility that a distant large planet had herded the objects near the ecliptic. But they didn’t press the result any further.

Later that year, at Caltech, Batygin and Brown began discussing the results. Plotting the orbits of the distant objects, Batygin says, they realized that the pattern that Sheppard and Trujillo had noticed “was only half of the story.” Not only were the objects near the ecliptic at perihelia, but their perihelia were physically clustered in space (see diagram, above).

For the next year, the duo secretly discussed the pattern and what it meant. It was an easy relationship, and their skills complemented each other. Batygin, a 29-year-old whiz kid computer modeler, went to college at UC Santa Cruz for the beach and the chance to play in a rock band. But he made his mark there by modeling the fate of the solar system over billions of years, showing that, in rare cases, it was unstable: Mercury may plunge into the sun or collide with Venus. “It was an amazing accomplishment for an undergraduate,” says Laughlin, who worked with him at the time.

Brown, 50, is the observational astronomer, with a flair for dramatic discoveries and the confidence to match. He wears shorts and sandals to work, puts his feet up on his desk, and has a breeziness that masks intensity and ambition. He has a program all set to sift for Planet X in data from a major telescope the moment they become publicly available later this year.

Their offices are a few doors down from each other. “My couch is nicer, so we tend to talk more in my office,” Batygin says. “We tend to look more at data in Mike’s.” They even became exercise buddies, and discussed their ideas while waiting to get in the water at a Los Angeles, California, triathlon in the spring of 2015.

First, they winnowed the dozen objects studied by Sheppard and Trujillo to the six most distant—discovered by six different surveys on six different telescopes. That made it less likely that the clumping might be due to an observation bias such as pointing a telescope at a particular part of the sky.

Batygin began seeding his solar system models with Planet X’s of various sizes and orbits, to see which version best explained the objects’ paths. Some of the computer runs took months. A favored size for Planet X emerged—between five and 15 Earth masses—as well as a preferred orbit: antialigned in space from the six small objects, so that its perihelion is in the same direction as the six objects’ aphelion, or farthest point from the sun. The orbits of the six cross that of Planet X, but not when the big bully is nearby and could disrupt them. The final epiphany came 2 months ago, when Batygin’s simulations showed that Planet X should also sculpt the orbits of objects that swoop into the solar system from above and below, nearly orthogonal to the ecliptic. “It sparked this memory,” Brown says. “I had seen these objects before.” It turns out that, since 2002, five of these highly inclined Kuiper belt objects have been discovered, and their origins are largely unexplained. “Not only are they there, but they are in exactly the places we predicted,” Brown says. “That is when I realized that this is not just an interesting and good idea—this is actually real.”

Sheppard, who with Trujillo had also suspected an unseen planet, says Batygin and Brown “took our result to the next level. …They got deep into the dynamics, something that Chad and I aren’t really good with. That’s why I think this is exciting.”

Others, like planetary scientist Dave Jewitt, who discovered the Kuiper belt, are more cautious. The 0.007% chance that the clustering of the six objects is coincidental gives the planet claim a statistical significance of 3.8 sigma—beyond the 3-sigma threshold typically required to be taken seriously, but short of the 5 sigma that is sometimes used in fields like particle physics. That worries Jewitt, who has seen plenty of 3-sigma results disappear before. By reducing the dozen objects examined by Sheppard and Trujillo to six for their analysis, Batygin and Brown weakened their claim, he says. “I worry that the finding of a single new object that is not in the group would destroy the whole edifice,” says Jewitt, who is at UC Los Angeles. “It’s a game of sticks with only six sticks.”
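The correspondence between the 0.007% coincidence probability and the 3.8-sigma significance quoted here is just the one-sided Gaussian tail. A minimal sketch (the helper names are mine) that inverts the tail probability by bisection, using only the Python standard library:

```python
import math

def tail_prob(z):
    """One-sided Gaussian tail probability P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def sigma_from_p(p, lo=0.0, hi=10.0, iters=100):
    """Find z such that P(Z > z) = p, by bisection (tail_prob is decreasing)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if tail_prob(mid) > p:
            lo = mid   # tail still too fat: z must be larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

p = 0.00007   # the quoted 0.007% chance of coincidental clustering
print(f"significance ~{sigma_from_p(p):.1f} sigma")  # ~3.8 sigma
```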



At first blush, another potential problem comes from NASA’s Wide-field Infrared Survey Explorer (WISE), a satellite that completed an all-sky survey looking for the heat of brown dwarfs—or giant planets. It ruled out the existence of a Saturn-or-larger planet as far out as 10,000 AU, according to a 2013 study by Kevin Luhman, an astronomer at Pennsylvania State University, University Park. But Luhman notes that if Planet X is Neptune-sized or smaller, as Batygin and Brown say, WISE would have missed it. He says there is a slim chance of detection in another WISE data set at longer wavelengths—sensitive to cooler radiation—which was collected for 20% of the sky. Luhman is now analyzing those data.

Even if Batygin and Brown can convince other astronomers that Planet X exists, they face another challenge: explaining how it ended up so far from the sun. At such distances, the protoplanetary disk of dust and gas was likely to have been too thin to fuel planet growth. And even if Planet X did get a foothold as a planetesimal, it would have moved too slowly in its vast, lazy orbit to hoover up enough material to become a giant.

Instead, Batygin and Brown propose that Planet X formed much closer to the sun, alongside Jupiter, Saturn, Uranus, and Neptune. Computer models have shown that the early solar system was a tumultuous billiards table, with dozens or even hundreds of planetary building blocks the size of Earth bouncing around. Another embryonic giant planet could easily have formed there, only to be booted outward by a gravitational kick from another gas giant.

It’s harder to explain why Planet X didn’t either loop back around to where it started or leave the solar system entirely. But Batygin says that residual gas in the protoplanetary disk might have exerted enough drag to slow the planet just enough for it to settle into a distant orbit and remain in the solar system. That could have happened if the ejection took place when the solar system was between 3 million and 10 million years old, he says, before all the gas in the disk was lost into space.

Hal Levison, a planetary dynamicist at the Southwest Research Institute in Boulder, Colorado, agrees that something has to be creating the orbital alignment Batygin and Brown have detected. But he says the origin story they have developed for Planet X and their special pleading for a gas-slowed ejection add up to “a low-probability event.” Other researchers are more positive. The proposed scenario is plausible, Laughlin says. “Usually things like this are wrong, but I’m really excited about this one,” he says. “It’s better than a coin flip.”

All this means that Planet X will remain in limbo until it is actually found.

Astronomers have some good ideas about where to look, but spotting the new planet won’t be easy. Because objects in highly elliptical orbits move fastest when they are close to the sun, Planet X spends very little time at 200 AU. And if it were there right now, Brown says, it would be so bright that astronomers probably would have already spotted it.

Instead, Planet X is likely to spend most of its time near aphelion, slowly trotting along at distances between 600 and 1200 AU. Most telescopes capable of seeing a dim object at such distances, such as the Hubble Space Telescope or the 10-meter Keck telescopes in Hawaii, have extremely tiny fields of view. It would be like looking for a needle in a haystack by peering through a drinking straw.
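The arithmetic behind these claims can be sketched with Kepler's third law and the vis-viva equation. The perihelion and aphelion used below (200 and 1200 AU) are illustrative values drawn from the ranges mentioned above, not published orbital elements:

```python
import math

# Illustrative orbit: perihelion ~200 AU, aphelion ~1200 AU (assumed values).
AU = 1.496e11          # meters per astronomical unit
GM_SUN = 1.327e20      # m^3/s^2, gravitational parameter of the Sun

r_peri = 200 * AU
r_apo = 1200 * AU
a = (r_peri + r_apo) / 2                  # semi-major axis

# Kepler's third law: P = 2*pi*sqrt(a^3 / GM)
period_s = 2 * math.pi * math.sqrt(a**3 / GM_SUN)
period_yr = period_s / (365.25 * 24 * 3600)

# Vis-viva: v^2 = GM * (2/r - 1/a) -- fast near perihelion, slow near aphelion
v_peri = math.sqrt(GM_SUN * (2 / r_peri - 1 / a))
v_apo = math.sqrt(GM_SUN * (2 / r_apo - 1 / a))

print(f"orbital period: {period_yr:,.0f} years")
print(f"speed at perihelion: {v_peri/1000:.2f} km/s")
print(f"speed at aphelion:   {v_apo/1000:.2f} km/s")
```

With these numbers the planet moves roughly six times faster at perihelion than at aphelion, which is why it spends almost all of a multi-thousand-year orbit in the distant, dim part of the sky.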

One telescope can help: Subaru, an 8-meter telescope in Hawaii that is owned by Japan. It has enough light-gathering area to detect such a faint object, coupled with a huge field of view—75 times larger than that of a Keck telescope. That allows astronomers to scan large swaths of the sky each night. Batygin and Brown are using Subaru to look for Planet X—and they are coordinating their efforts with their erstwhile competitors, Sheppard and Trujillo, who have also joined the hunt with Subaru. Brown says it will take about 5 years for the two teams to search most of the area where Planet X could be lurking.

If the search pans out, what should the new member of the sun’s family be called? Brown says it’s too early to worry about that and scrupulously avoids offering up suggestions. For now, he and Batygin are calling it Planet Nine (and, for the past year, informally, Planet Phattie—1990s slang for “cool”). Brown notes that neither Uranus nor Neptune—the two planets discovered in modern times—ended up being named by their discoverers, and he thinks that that’s probably a good thing. It’s bigger than any one person, he says: “It’s kind of like finding a new continent on Earth.”

He is sure, however, that Planet X—unlike Pluto—deserves to be called a planet. Something the size of Neptune in the solar system? Don’t even ask. “No one would argue this one, not even me.”

Source :

Gasoline From Sand!

Posted in Energy, Science, Technology on January 20, 2016 by betweentwopines


Could the discovery of silanes spell the end of fossil fuels?

Peter Plichta’s book “Benzin aus Sand” (Gasoline from Sand), first published in 2001, advocates a change in energy strategy away from burning hydrocarbons to using the energy potential of silanes or, as I would term them, hydrosilicates.
Nitrogen oxidizes silicon

Silicon is the most abundant element in the earth’s crust. Combined with hydrogen, silicon forms what in chemistry are known as “silanes”. Given sufficient heat, silanes react with the nitrogen in the air. This is a new discovery: nitrogen was thought to be inert as far as combustion is concerned, so we obviously must re-think the possibilities of combustion. Silicon makes up 25% of the earth’s crust, while nitrogen makes up 80% of air. A process that uses silicon/nitrogen combustion in addition to the known carbon/oxygen cycle presages some mind-boggling new possibilities.

While carbon is also a relatively abundant element, its prevalence is far lower than that of silicon, by a factor of roughly a hundred. In addition, most of the available carbon is bound up in carbonaceous minerals such as marble and other carbon-based rocks, and some of it is in the atmosphere as carbon dioxide. Those forms are not available for use in the combustion cycle. Only about one in a hundred thousand carbon atoms is bound to hydrogen, making it available for the purpose of combustion. So while carbon has served us well for the first century and a half of industrialization, it is a rather limited fuel.
Using 100% of air for combustion

Plichta’s idea was to exchange chains of carbon atoms in hydrocarbons for chains of silicon in hydrosilicons or silanes. The long chained “higher silanes” are those with five or more silicon atoms in each molecule. They are of oily consistency and they give off their energy in a very fast, highly energetic combustion.

While hydrocarbon-based gasoline uses only oxygen, which makes up 20% of air, for its combustion, the hydrosilicon-based silanes also use nitrogen, which makes up the other 80% of air, when they burn. Silanes with chains of seven or more silicon atoms per molecule are stable and can be pumped and stored much like gasoline and other carbon-based liquid fuels.

The efficiency of combustion depends on the amount of heat that is created. Expanding gases drive pistons or turbines. When hydrocarbons are burned with air as the oxidant, efficiency is limited by the fact that the heat released by the reacting 20% of the air also has to warm the remaining nitrogen, which does not participate in the reaction but must be expanded all the same. When burning silanes, practically all of the air participates directly in the combustion cycle, making for a much more efficient expansion of all the gases involved.
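A toy calculation makes the argument concrete. Assuming air is roughly 21% oxygen and 79% nitrogen by volume, the ratio of inert gas heated per unit of reacting gas looks like this (illustrative only; it ignores fuel vapor and real flame chemistry):

```python
# Toy comparison: moles of inert gas that must be heated per mole of
# reacting gas, assuming air is ~21% O2 and ~79% N2 by volume.
o2_frac, n2_frac = 0.21, 0.79

# Hydrocarbon combustion: only the O2 reacts; all the N2 is dead weight.
inert_per_reacting_hc = n2_frac / o2_frac

# Silane combustion as described above: both O2 and N2 react, so no
# inert fraction remains to soak up heat.
inert_per_reacting_silane = 0.0 / (o2_frac + n2_frac)

print(f"hydrocarbon: ~{inert_per_reacting_hc:.1f} mol inert N2 heated per mol O2 burned")
print(f"silane:      ~{inert_per_reacting_silane:.1f} mol inert gas per mol reacting gas")
```

In other words, a hydrocarbon flame drags along nearly four moles of non-reacting nitrogen for every mole of oxygen it burns, which is the loss the silane process is claimed to avoid.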
Burning silanes

The combustion process of hydrosilicons is fundamentally different from the exclusively oxygen-based combustion we know from burning hydrocarbons. In a sufficiently hot reaction chamber, silanes separate into atoms of hydrogen and silicon, which immediately mix with the oxygen and nitrogen of the air. The hydrogen from the silanes and the air’s oxygen now burn completely, leaving only water vapor and bringing the temperature of the gases close to 2000 degrees C.

Since there is no more oxygen, no silicon oxide can be formed in the following phase. What happens instead is an extremely energetic reaction of the 80% nitrogen in the air with the silicon atoms present, forming a fine powder called silicon nitride (Si3N4).

For those more technically inclined, taking the example of hexasilane (Si6H14), here is what the reaction would look like:

• 2 Si6H14 + 7 O2 + 8 N2 -> 4 Si3N4 + 14 H2O

After this first reaction, a great deal of unreacted nitrogen is still in the combustion gases, which would now react in a stoichiometric combustion as follows:

• 4 1/2 Si6H14 + 18 N2 -> 9 Si3N4 + 63 H

Overall, on the input side of the equation we would have:

• 6 1/2 Si6H14 + 7 O2 + 26 N2

and on the output side, we get:

• 14 H2O + 13 Si3N4 + 63 H
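As a sanity check, the three reactions above can be verified to be element-balanced with a short script (the `parse` helper and the use of fractions for the "4 1/2" and "6 1/2" coefficients are my own illustration, not from Plichta):

```python
import re
from collections import Counter
from fractions import Fraction

def parse(species):
    """Count atoms in a simple formula like 'Si6H14' or 'H2O'."""
    atoms = Counter()
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", species):
        atoms[element] += int(count or 1)
    return atoms

def side_atoms(side):
    """Total atoms on one side; side is a list of (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in side:
        for element, n in parse(formula).items():
            total[element] += Fraction(coeff) * n
    return total

def balanced(lhs, rhs):
    return side_atoms(lhs) == side_atoms(rhs)

# The three reactions quoted above, with '4 1/2' and '6 1/2' as fractions.
assert balanced([(2, "Si6H14"), (7, "O2"), (8, "N2")],
                [(4, "Si3N4"), (14, "H2O")])
assert balanced([(Fraction(9, 2), "Si6H14"), (18, "N2")],
                [(9, "Si3N4"), (63, "H")])
assert balanced([(Fraction(13, 2), "Si6H14"), (7, "O2"), (26, "N2")],
                [(14, "H2O"), (13, "Si3N4"), (63, "H")])
print("all three reactions are element-balanced")
```

All three check out: every silicon, hydrogen, oxygen, and nitrogen atom on the input side is accounted for in the output.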

The silicon nitride we find in the “exhaust” is, in Peter Plichta’s original characterization, as chemically inert as a noble gas yet exists in solid form. That white powdery stuff is a rather valuable raw material for ceramics.

Wikipedia says that silicon nitride powder will form

“… a hard ceramic having high strength over a broad temperature range, moderate thermal conductivity, low coefficient of thermal expansion, moderately high elastic modulus, and unusually high fracture toughness for a ceramic. This combination of properties leads to excellent thermal shock resistance, ability to withstand high structural loads to high temperature, and superior wear resistance. Silicon nitride is mostly used in high-endurance and high-temperature applications, such as gas turbines, car engine parts, bearings and metal working and cutting tools. Silicon nitride bearings are used in the main engines of the NASA’s Space shuttles.”

Rocket fuel for space propulsion

One of the first uses Peter Plichta envisioned for these long-chain hydrosilicons he discovered was to be a fuel for rockets. Space travel today is hindered by the immense weight of fuel a rocket has to carry to lift itself plus the fuel, plus its payload, into space. With a more efficient combustion process, and an oxidant that could be “scooped up” in the atmosphere, a disk-shaped craft could be propelled to great speed and altitude, before having to fall back on a rather small amount of oxidant that may be carried as liquefied air or liquid nitrogen.

I found a discussion of this on the net, here, which I reproduce below in shortened and slightly edited form: “Dr Plichta can use his concepts of cyclic mathematics to effect a revolution in space travel. He has already received several patents for the construction of a disc-shaped reusable spacecraft which will be fueled by the diesel oils of silicon. The special feature of these carbon analog substances is that they do not only burn with oxygen, but also with nitrogen. Such a spacecraft can use the atmosphere for buoyancy. Its engines can inhale air and thus do without the standard oxidant reservoir.


In 1970 Peter Plichta disproved the textbook theory that the higher silanes are unstable. One of his achievements was to create a mixture of silanes with chain lengths of 5 to 10 (Si5H12 to Si10H22). He also managed to separate the oil into the individual silanes by means of gas chromatographic analysis. This showed the surprising result that silanes with a chain length of over 7 silicon atoms will no longer ignite spontaneously and can thus be used for commercial purposes.

From a mathematical point of view, multi-stage rockets operate according to the principles of rocket ascent. At launch, the first stage has to lift the rocket’s whole weight with the power of fuel combustion. Because the rocket quickly loses weight as it uses up fuel, it then accelerates even though the thrust remains the same. The discarded stages burn up in the atmosphere, which can only be described as a ridiculous waste of money. The Space Shuttle was intended to make space travel less costly; actually the opposite has happened. Just as the invention of the wheel made all human transport easier, a circular spacecraft will some day soon replace the linear design of current multi-stage rockets. We are all familiar with the elegance with which a disc or a Frisbee is borne by the air through which it flies.

Peter Plichta got the idea of constructing a disc in which jet-turbines attached to shafts would drive two ring-shaped blade rings rotating in opposite directions. This will cause the disc to be suspended by the air just like a helicopter. The craft can then be driven sideways by means of a drop-down rocket engine. When a speed of over 200 km/h has been reached, the turbines for the blade rings will be switched off and covered to enhance the aerodynamic features of the shape. The craft will now be borne by the up-draught of the air, just like an aircraft is. This will also mean that the critical power required for rocket ascent will not be necessary. When the spacecraft accelerates into orbit, the N2/O2 mixture of the air will first be fed in through a drop-down air intake, as long as the craft is still at a low altitude of 30 km (1 per cent air pressure). The air will be conducted to the rocket motor and the craft will thus accelerate to a speed of 5000-8000 km/h. This is where a standard rocket jettisons its first stage, because by then about 75% of the fuel has already been used up.

The disc on the other hand will continue to accelerate to 20,000 km/h and will thus reach an altitude of about 50 km (1 per thousand of air pressure). The speed will increase as the air pressure drops, so that the process can be continued until an altitude of about 80 kilometers and 25,000 km/h can be maintained. In order to reach the required speed of 30,000 km/h and an altitude of around 300 km, only a relatively small quantity of oxidation agent will be needed at the end.
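The claim that most of a rocket's fuel is gone by first-stage separation follows from the Tsiolkovsky rocket equation. The exhaust velocity below is a typical kerosene/LOX value assumed for illustration (the text gives no silane figure), and the result ignores gravity and drag losses:

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / mf)
v_e = 3000.0  # m/s, assumed effective exhaust velocity (illustrative)

def delta_v(m0, mf, ve=v_e):
    """Ideal velocity gain when mass drops from m0 to mf."""
    return ve * math.log(m0 / mf)

# If 75% of the launch mass has been burned off by stage separation:
m0, mf = 1.0, 0.25
dv = delta_v(m0, mf)
print(f"ideal delta-v after burning 75% of mass: {dv/1000:.2f} km/s "
      f"({dv*3.6:,.0f} km/h)")
```

The logarithm is the whole problem: each additional increment of speed costs an exponentially larger fuel fraction, which is why an air-breathing first phase, as proposed here, is attractive on paper.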

In the hot combustion chamber silanes decompose spontaneously into hydrogen and silicon radicals. The hydrogen is burned by the oxygen in the air and water formed. Because molecular nitrogen is very tightly bonded, it must be preheated and subjected to catalytic dissociation. The extremely hot silicon radicals will provide additional support for this process, which will in turn lead to silicon nitride being formed. In order to burn superfluous nitrogen, Mg, Al or Si powder can be added to the silane oil.

When the spacecraft returns from space the ceramic-protected underside of the disc will brake its speed to approximately 500 km/h. Then the covering will open again, making the blade rings autorotate. The jet turbines will then be started for the actual landing operation.

In 2006, Plichta developed a new low-cost procedure for the production of highly purified silicon. This makes it possible to hypothesize a more widespread use of silanes. If widely and cheaply available one day, the new fuel could be used in turbines and modified internal combustion engines, in addition to space rocket use.

Large-scale production of silanes

In order to use long-chain silanes as a fuel, the possibility of large-scale production of those silicon oils will have to be experimentally confirmed. According to Plichta, this process would also involve production of pure silicon for use in photovoltaic or other industrial applications. High-grade energy is needed to transform silicon oxide into pure silicon, which is then reacted with hydrogen to produce the silanes.

One possible way to go about this is to use photovoltaic electricity to dissociate water into hydrogen and oxygen. Those gases could then be used to process sand into pure silicon and to obtain silanes.

Another procedure, widely used today, is to purify silicon dioxide using heat from coal, but Plichta has now developed a new process that would use tar, pitch and bitumen as well as aluminium silicate to produce pure silicon and silanes at a very low cost. The highly exothermic process produces large amounts of hydrogen and involves superheated hydrogen fluoride. Monosilanes, a by-product of this new process, could be reacted with carbon dioxide to obtain water and silicon carbide, an extremely hard substance and industrial raw material.

Details are still confidential. The process is being patented.
Turbines and engines

Since the silane combustion process is substantially different from that of the hydrocarbons used today, specially designed turbines and engines will be needed to make use of the new fuel. Dr Plichta has patented a turbine that would optimally use the silicon-based combustion process.


Silane oil (10) and silicon powder (11) are mixed and injected by a pump (7) into the main combustion chamber. There the fuel is burned together with pre-heated air (8). In the secondary combustion chamber (2) the fuel mix is further burned with a large amount of cold air (9), quickly lowering the temperature of the gases from about 2000 degrees C to a few hundred degrees. This brings a large pressure increase. If the silicon nitride powder produced by the combustion process were too hot and not diluted with air, it would destroy the turbine blades.

The resulting mixture of gases (H2O, O2, and Si3N4 of oily consistency) is now able, in the turbine chamber (3), to make the turbine blades rotate. The rotation is transmitted over a connected shaft (5) to the compressor chamber (4), where air is drawn in through air inlets (6). The air is mostly conducted into the secondary combustion chamber (2), and a small part of it goes, after heating, to the first combustion chamber (1). The absorption of heat by the air also provides needed cooling of the combustion chambers.

The water vapor produced by the combustion process leaves the turbine through exhaust openings (21) while the cooled down, solid silicon nitride is trapped in dust bags (20), ready to be passed on for later industrial uses.
Internal combustion engines of the Otto and Diesel type would suffer breakdown of lubrication if made to burn silicon oils. The temperatures of combustion are considerably higher than those reached by gasoline or diesel. But according to Plichta, the Wankel-type rotary piston motor could be modified to accommodate the high temperatures. Its parts would have to be coated with silicon nitride ceramics or be entirely constructed from the even harder silicon carbide.

The silane oils could not be compressed together with air; they would have to be injected at the point of maximal compression. The silicon nitride contained in the combusting fuel/air mixture would initially be in gaseous and liquid form, providing the necessary lubrication and acting as a sealant. Exhaust gases, still very hot, could be further burned in a turbine, with the addition of cold air as in the second stage of Plichta’s turbine design.

Like in the turbine, combustion in this engine would produce small amounts of silicon nitride in powder form, which would be filtered out from the exhaust gases and collected by filling stations, to be passed on for industrial uses.
Solar energy and silanes – closing the circle

Solar energy can be transformed into electricity without much trouble, but not everything in this technological world can be run with electricity. Storage is a problem as battery technology definitely is not up to the task yet. One way around that is to produce hydrogen with solar energy and use the hydrogen as a fuel. This is problematic because of the volatility and the relatively low energy density of molecular hydrogen.

Bringing silicon into this cycle would allow us to continue using liquid fuels where needed, and given that silanes store energy at a higher density than hydrocarbons, and definitely at a higher density than pure hydrogen, this may be a good route to choose.

There are no byproducts of this cycle that would have to be vented into the environment and be destructive. The principal “exhaust gas” from silane combustion, silicon nitride, is a valuable industrial raw material that can easily be collected and recycled into technical and construction uses.

Should there be “too much of a good thing”, an overabundance of silicon nitride, the powder could also be chemically transformed using sodium hydroxide (NaOH) or potassium hydroxide (KOH). The transformation would produce ammonia (NH3) and water-soluble silicates. The silicates are non-toxic and will degrade in ambient air to form sand crystals.

Although ammonia is a toxic gas, it burns without any toxic residues and without carbon emission, so it could be used in the production of further energy, or even as a fuel in cars. Burning ammonia with air produces steam and pure nitrogen:

• 4 NH3 + 3 O2 -> 2 N2 + 6 H2O
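For a rough sense of the energy involved, the heat of this reaction can be estimated from standard textbook enthalpies of formation (the figures below are common reference values for the gas phase, not from the article):

```python
# Standard enthalpies of formation in kJ/mol (textbook gas-phase values).
dHf = {"NH3": -45.9, "O2": 0.0, "N2": 0.0, "H2O": -241.8}

# 4 NH3 + 3 O2 -> 2 N2 + 6 H2O(g)
products = 2 * dHf["N2"] + 6 * dHf["H2O"]
reactants = 4 * dHf["NH3"] + 3 * dHf["O2"]
dH_reaction = products - reactants          # kJ per 4 mol of NH3 burned

print(f"heat released: {-dH_reaction:.0f} kJ per 4 mol NH3 "
      f"({-dH_reaction/4:.0f} kJ per mol NH3)")
```

That works out to roughly 317 kJ per mole of ammonia, modest compared with hydrocarbons but, as the text notes, entirely carbon-free.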

Other uses for ammonia would be the production of nitrogen-rich fertilizer, dynamite, or household ammonia, which is ammonia diluted in water.

The complete solar/silane cycle would involve the production of pure silicon from sand, either using solar energy or tars and bitumens. The next step is the synthesis of higher silanes; Plichta proposes a modified high-pressure Müller-Rochow synthesis for this step. Then the silanes could be burned in modified turbines and engines, or used in space propulsion systems. The fourth step is the recycling and re-use of the principal product of combustion, silicon nitride. What is not used industrially can be chemically transformed into ammonia, which in turn yields the nitrogen used for combustion in step 3.

The pure silicon produced in step 1 would be of use in the production of more and cheaper solar panels to more efficiently capture the sun’s free energy.

Read more :

CitiGroup: The End of Pax Americana, With No Replacement in Sight.

Posted in ATS Thread, Finance, Warnings on January 19, 2016 by betweentwopines


Post by SkepticOverlord at ATS

Just a few days ago, CitiGroup released a new report that reads like an alarm klaxon sounding the end of the current geopolitical world order, with nothing but chaos to follow. Indeed, the “banksters” are admitting that everything we thought we knew about how the world operates is either wrong, or coming to a jarring end. The report is centered around the idea that “Pax Americana” has either already ended, or is in its final death throes.

“Pax Americana” is the term often used to describe the (previously) current geopolitical order: the idea that the general peace and stability of the post-WWII world is due to America’s dominant economic and political power, backed up by its ridiculously large military. In its analysis, Citi is certain that the Pax Americana era is over, but the main problem for the future is what they’re calling the “Great Power Sclerosis.” In other words, there’s nothing to replace Pax Americana, other than chaos, disorder, and a great many panicking investment bankers.

The full PDF of the report, “Global Political Risk: The New Convergence Between Geopolitical and Vox Populi Risks, and Why It Matters,” is available here, and I encourage all to take the time to read its 70+ pages. Because there are times when the pragmatic warnings, predictions, and fears of those advising investors are very important to us, the skeptical. This is most certainly one of those times.

Some key excerpts follow…

The report begins with:

2016 has begun, as 2015 ended, amid a significant worsening of the global political climate and along with that, considerable volatility in financial markets. Investors and businesses are increasingly aware of the need to understand the drivers and the implications of a greater level of event risk exacerbated by shifting social patterns.

Well, tell us something we didn’t know.

In the section titled, “Is This the Dawning of a New Era,” with regard to the death of Pax Americana:

What’s more, we see little sign of this trend of political risk cutting across advanced and emerging economies reversing. We think it’s unlikely that the moderate global growth that Citi’s economists forecast as their central scenario will dampen these risks. If anything, the data we have analyzed for this report, combined with our combined expertise in comparative political science and international relations and security and defense analysis, underscores how, by many measures, these risks are on the rise and indeed could endanger even the already modest prospects for global growth

In other words, translated from investor-speak; put on the air mask and assume crash-landing position.

And an invocation of the dreaded “Black Swan”:

In our view, political and business leaders will need to be more attuned to the new shape of global political risk, a paradigm shift that means that previous policies will fail to keep pace and uncertainty will remain high, with the potential to interact in unexpected ways. Among the key implications of this more fragile and interconnected risk outlook is that so-called Black Swan events — in this case, geopolitical events producing instability spanning several orders of magnitude — may be both more likely and more difficult for leaders and global financial institutions to resolve.

Nassim Nicholas Taleb developed the Black Swan theory to describe the disproportionate role of high-profile, hard-to-predict, and rare events that are beyond the realm of normal expectations in history, science, finance, and technology. The 9/11 attacks are a prime example of a Black Swan.

The report closes with this bit of doom-porn:

Over the long-term, failure to devise policies to address middle class anxiety and declining living standards increases the likelihood that Vox Populi risk — including mass protests and government collapses — could move from being episodically disruptive to systemic, undermining globalization in the process. And we are deeply concerned that the political capital necessary to stem the refugee crisis and terrorist threat, perhaps best-characterized as the collision between previous foreign policy failures and current governance capacity, exceeds that available to government leaders, who have relied upon central banks to manage the lion’s share of global crises over the past several years. 2016 could be a very political year for markets.

Overall, the report is an excellent analysis of the condition our condition is in, which isn’t good by any measure. It’s a very enlightening read, and worth your time.

Ron Paul Warns: “Watch The Petrodollar”

Posted in Energy, Finance, Politics, Warnings on January 19, 2016 by betweentwopines


The chaos that one day will ensue from our 35-year experiment with worldwide fiat money will require a return to money of real value. We will know that day is approaching when oil-producing countries demand gold, or its equivalent, for their oil rather than dollars or euros. The sooner the better. – Ron Paul

Ron Paul is calling for the end of the petrodollar system. This system is one of the main reasons the U.S. dollar is the world’s premier reserve currency.

Essentially, Paul is saying that understanding the petrodollar system and the forces affecting it is the best way to predict when the U.S. dollar will collapse.

Paul and I discussed this extensively at one of the Casey Research Summits. He told me he stands by his assessment.

Nick Giambruno and Ron Paul

This is critically important. When the dollar loses its coveted status as the world’s reserve currency, the window of opportunity for Americans to protect their wealth from the U.S. government will definitively shut.

At that point, the U.S. government will implement the same destructive measures other desperate governments have used throughout history: overt capital controls, wealth confiscation, people controls, price and wage controls, pension nationalizations, etc.

The dollar’s demise will wipe out the wealth of a lot of people. But it will also trigger political and social consequences likely to be far more damaging than the financial fallout.

The two key takeaways are:

  1. The U.S. dollar’s status as the premier reserve currency is tied to the petrodollar system.
  2. The sustainability of the petrodollar system relies on volatile geopolitics in the Middle East (where I lived and worked for several years).

From Bretton Woods to the Petrodollar

The Bretton Woods international monetary system, which the Allied powers created in 1944, turned the dollar into the world’s premier reserve currency.

After WWII, the U.S. had by far the largest gold reserves in the world (around 706 million ounces). These large reserves – in addition to winning the war – allowed the U.S. to reconstruct the global monetary system around the dollar.

The Bretton Woods system tied virtually every country’s currency to the U.S. dollar through a fixed exchange rate. It also tied the U.S. dollar to gold at a fixed exchange rate.

Countries around the world stored dollars for international trade or to exchange with the U.S. government at the official rate for gold ($35 an ounce at the time).

By the late 1960s, excessive spending on welfare and warfare, combined with the Federal Reserve monetizing the deficits, drastically increased the number of dollars in circulation relative to the gold backing them.

Naturally, this made other countries exchange more dollars for gold at an increasing rate. This drained the U.S. gold supply. It dropped from 706 million ounces at the end of WWII to around 286 million ounces in 1971 (a figure supposedly held constant to this day).
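The scale of that drain is easy to check against the official redemption price, using only the figures quoted above (a back-of-the-envelope sketch):

```python
# Figures quoted in the text above.
oz_1945 = 706e6      # ounces of gold after WWII
oz_1971 = 286e6      # ounces remaining in 1971
price = 35.0         # official dollars per ounce under Bretton Woods

drained_oz = oz_1945 - oz_1971
drained_usd = drained_oz * price
print(f"gold redeemed: {drained_oz/1e6:.0f} million oz "
      f"(~${drained_usd/1e9:.1f} billion at the official $35/oz rate)")
```

Some 420 million ounces, nearly 60% of the postwar hoard, walked out the door before Nixon closed the gold window.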

To stop the drain, President Nixon ended the dollar’s convertibility for gold in 1971. This ended the Bretton Woods system.

In other words, the U.S. government defaulted on its promise to back the dollar with gold. This eliminated the main motivation for other countries to hold large U.S. dollar reserves and use the U.S. dollar for international trade.

With the dollar no longer convertible into gold, demand for dollars by foreign nations was sure to fall, and with it, the dollar’s purchasing power.

OPEC, a group of oil-producing countries, passed numerous resolutions after the end of Bretton Woods, stating its need to maintain the real value of its earnings. It even discussed accepting gold for oil. Ultimately, OPEC significantly increased the nominal dollar price of oil.

For the dollar to maintain its status as the world’s reserve currency, the U.S. would have to concoct a new arrangement that gave foreign countries a compelling reason to hold and use dollars.

The Petrodollar System

From 1972 to 1974, the U.S. government made a series of agreements with Saudi Arabia. These agreements created the petrodollar system.

The U.S. government chose Saudi Arabia because of its vast petroleum reserves, its dominant position in OPEC, and the (correct) perception that the Saudi royal family was corruptible.

In essence, the petrodollar system was an agreement that the U.S. would guarantee the survival of the House of Saud. In exchange, Saudi Arabia would:

  1. Use its dominant position in OPEC to ensure that all oil transactions would happen in U.S. dollars.
  2. Invest a large amount of its dollars from oil revenue in U.S. Treasury securities and use the interest payments from those securities to pay U.S. companies to modernize the infrastructure of Saudi Arabia.
  3. Guarantee the price of oil within limits acceptable to the U.S. and prevent another oil embargo by other OPEC members.

Oil is the world’s most traded and most strategic commodity. Needing to use dollars for oil transactions is a very compelling reason for foreign countries to keep large U.S. dollar reserves.

For example, if Italy wants to buy oil from Kuwait, it has to purchase U.S. dollars on the foreign exchange market to pay for the oil first. This creates an artificial market for U.S. dollars that would not otherwise exist.

The demand is artificial because the U.S. dollar is just a middleman in a transaction that has nothing to do with a U.S. product or service. Ultimately, it translates into increased purchasing power and a deeper, more liquid market for the U.S. dollar and U.S. Treasuries.

Additionally, the U.S. has the unique privilege of not having to use foreign currency to buy imports, including oil. Instead, it gets to use its own currency, which it can print.

It’s hard to overstate how much the petrodollar system benefits the U.S. dollar. It’s allowed the U.S. government and many Americans to live beyond their means for decades.

What to Watch For

The geopolitical sands of the Middle East are rapidly shifting.

Saudi Arabia’s strategic regional position is weakening. Iran, which is notably not part of the petrodollar system, is on the rise. U.S. military interventions are failing. And the emerging BRICS countries are creating potential alternatives to U.S.-dominated economic/security arrangements. This all affects the sustainability of the petrodollar system.

I’m watching the deteriorating relationship between the U.S. and Saudi Arabia with a particularly close eye.

The Saudis are furious because they don’t think the U.S. is holding up its end of the petrodollar deal, which in their view would mean attacking their regional rivals more aggressively.

This suggests that they might not uphold their part of the deal much longer, namely selling their oil exclusively in U.S. dollars.

The Saudis have even suggested a “major shift” is under way in their relationship with the U.S. To date, though, they haven’t matched their words with action, so it may just be a temper tantrum or a bluff.

The Saudis need an outside protector. So far, they haven’t found any suitable replacements for the U.S. In any case, they’re using truly unprecedented language.

This situation may reach a turning point when U.S. officials start expounding on the need to transform the monarchy in Saudi Arabia into a “democracy.” But don’t count on that happening as long as Saudi oil sells exclusively for U.S. dollars.

Regardless, the chances that the Kingdom might implode on its own are growing.

For the first time in decades, observers are calling into question the viability of the Saudi currency, the riyal. The Saudi central bank currently pegs the riyal at a rate of 3.75 riyals per U.S. dollar.

The Saudi government spends a ton of money on welfare to keep its citizens sedated. Lower oil prices plus the cost of their mischief in the region are cutting deep into government revenue. So there’s less money to spend on welfare.

There’s a serious crunch in the Saudi budget. They’ve only been able to stay afloat by draining their foreign exchange reserves. That threatens their currency peg.
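The logic of a reserve-drain squeeze on a currency peg can be sketched with two numbers. Both figures below are placeholders for illustration, not actual Saudi data:

```python
# Hypothetical runway calculation for a pegged currency.
# Both inputs are assumed placeholder values, not real figures.
fx_reserves_usd = 600e9        # assumed stock of foreign exchange reserves
annual_deficit_usd = 100e9     # assumed annual rate of reserve drain

years_of_runway = fx_reserves_usd / annual_deficit_usd
print(f"at this burn rate, the peg can be defended for ~{years_of_runway:.0f} years")
```

The point is not the specific numbers but the mechanism: a fixed peg survives only as long as reserves last, so a persistent deficit puts a countdown clock on it.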

Recently, Saudi officials have begun telling the media that the currency peg is fine and there’s nothing to worry about. That’s another clue that there’s trouble. Official government denial is almost always a sign of the opposite. It’s like the old saying: “Believe nothing until it has been officially denied.”

If there were a convenient way to short the Saudi riyal, I would do it in a heartbeat.

Timing the Collapse

Long before Nixon ended the Bretton Woods system in 1971, it was clear that a paradigm shift in the global monetary system was inevitable.

Today, another paradigm shift seems inevitable. As Ron Paul explained, there’s one sure way to know when that shift is imminent:

We will know that day is approaching when oil-producing countries demand gold, or its equivalent, for their oil rather than dollars or euros.

It’s very possible that, one day soon, Americans will wake up to a new reality, just as they did in 1971 when Nixon severed the dollar’s final link to gold.

The petrodollar system has allowed the U.S. government and many U.S. citizens to live way beyond their means for decades. It also gives the U.S. unchecked geopolitical leverage. The U.S. can exclude virtually any country from the U.S. dollar-based financial system…and, by extension, from the vast majority of international trade.

The U.S. takes this unique position for granted. But it will disappear once the dollar loses its premier status.

This will likely be the tipping point…

Afterward, the U.S. government will be desperate enough to implement capital controls, people controls, nationalization of retirement savings, and other forms of wealth confiscation.

I urge you to prepare for the economic and sociopolitical fallout while you still can. Expect bigger government, less freedom, shrinking prosperity…and possibly worse.

It’s probably not going to happen tomorrow. But it’s clear where the trend is headed.

Once the petrodollar system kicks the bucket and the dollar loses its status as the world’s premier reserve currency, you will have few, if any, options to protect yourself.

This is why it’s essential to act before that happens.

The sad truth is, most people have no idea how bad things could get, let alone how to prepare…

Yet there are straightforward steps you can start taking today to protect your savings and yourself from the financial and sociopolitical effects of the collapse of the petrodollar.



Scientists Find Hints for the Immortality of the Soul

Posted in Cosmos, Paranormal, Science, Spirituality on January 18, 2016 by betweentwopines

By Dr. Rolf Froböse

Some international physicists are convinced that our spirit has a quantum state and that the dualism between body and soul is just as real as the “wave-particle dualism” of the smallest particles.

Dr. James G. of San Francisco, a former researcher with the German Max Planck Society in Frankfurt, reported the following incredible story. “I studied not only in the USA, but also chemistry in London for a few semesters. When I came to England, the student housing was full, so I added my name to a waiting list. A short time later, I received the joyous news that a room had become available. Shortly after I had moved in, I awoke one night and in the twilight was able to see a young man with curly, black hair. I was terrified and told the alleged neighbor that he had the wrong room. He simply cried and looked at me with great sadness in his eyes.

When I turned on the light, the apparition had disappeared. Since I was one hundred percent sure it had not been a dream, I told the housemaster about the strange encounter the next morning. I gave her a detailed description of the young man. She suddenly paled. She looked through the archives and showed me a photo. I immediately recognized the young man who had visited me in my room the evening before. When I asked her who he was, she replied with a quivering voice that it was the previous renter. She then added that my room had become available because he had taken his own life shortly before.”

The author would never have recorded the story had “James” not been an absolutely trustworthy person.

Prof. Dr. Hans-Peter Dürr, former head of the Max Planck Institute for Physics in Munich, holds the view that the dualism of the smallest particles is not limited to the subatomic world but is omnipresent. In other words, the dualism between body and soul is just as real to him as the “wave-particle dualism” of the smallest particles. In his view, a universal quantum code exists that applies to all living and dead matter and supposedly spans the entire cosmos. Consequently, Dürr believes – again on purely physical grounds – in an existence after death. He explained it as follows in an interview:

“What we consider the here and now, this world, is actually just the material level that is comprehensible. The beyond is an infinite reality that is much bigger, and in which this world is rooted. In this way, our lives in this plane of existence are already encompassed, surrounded, by the afterworld. If I imagine that I have written my existence in this world onto a sort of hard drive on the tangible level (the brain), and that I have also transferred this data onto the spiritual quantum field, then I could say that when I die, I do not lose this information, this consciousness. The body dies, but the spiritual quantum field continues. In this way, I am immortal.”

Dr. Christian Hellweg is also convinced the spirit has a quantum state. Following his studies in physics and medicine, he researched brain function at the Max Planck Institute for Biophysical Chemistry in Göttingen for many years. He was able to show that information in the central nervous system can be phase encoded. In recent years, he has dedicated himself to studying the body-soul issue and researching phantom perceptions and hallucinations. He is especially interested in tinnitus, a phantom perception in the sense of hearing. He has also specialized in the therapy thereof. He summarizes his thesis as follows:

“Our thoughts, our will, our consciousness and our feelings show properties that could be referred to as spiritual properties… No direct interaction with the known fundamental forces of natural science, such as gravitation or electromagnetic forces, can be detected in the spiritual. On the other hand, these spiritual properties correspond exactly to the characteristics that distinguish the extremely puzzling and wondrous phenomena of the quantum world. ‘Quantum world’ here refers to that realm of our world that is not yet factual; in other words, the realm of possibility, the realm of uncertainty, where we “know what” but do not exactly “know when or how”. From the standpoint of traditional physics, it must be concluded that this realm actually exists.”

American physicist John Archibald Wheeler strikes a similar note: “Many scientists hoped… that the world, in a certain sense, was classical – or at least free of curiosities such as large objects being in two places at the same time. But such hopes were crushed by a series of new experiments.” There are now university research teams examining the interaction between consciousness and matter. One of the leading researchers in this field is physicist Professor Robert Jahn of Princeton University in New Jersey. He concludes that if effects and information can be exchanged in both directions between human consciousness and the physical environment, then one must also assume a resonance or “molecular binding potential” for consciousness. In summary: according to this theory, one would have to grant consciousness the known quantum properties as well. In his opinion it makes no sense to assign terms such as information or resonance exclusively to either the physical world or the spiritual consciousness, or to separate physical effects from spiritual effects.

Quantum physicist David Bohm, a student and friend of Albert Einstein, made similar claims. His summary: “The results of modern natural science only make sense if we assume an inner, uniform, transcendent reality that underlies all external data and facts. The very depth of human consciousness is one of them.”

Nuclear physicist and molecular biologist Jeremy Hayward of Cambridge University makes no secret of his convictions either: “Many scientists who are still part of the scientific mainstream are no longer afraid to state openly that consciousness, in addition to space, time, matter and energy, could be a fundamental element of the world – perhaps even more fundamental than space and time. It may be a mistake to banish the spirit from nature.” He even questions whether matter should be considered a fundamental element of the universe.

Source: Rolf Froböse, “The Secret Physics of Coincidence. Quantum Phenomena and Fate – Can Quantum Physics Explain Paranormal Phenomena?” BoD, 2013, 112 pages, ISBN 3848234459