Monday, November 25, 2013

The International Space Station: Problems and Issues.

A project of more than 15 nations, the International Space Station (ISS) is designed to be an inhabited satellite orbiting the Earth. With the initial parts of the station launched in 1998, the station finally received a crew of three astronauts, one American and two Russians (NASA, 2010). But after the destruction of the Space Shuttle Columbia in 2003, which disintegrated upon re-entry into the Earth's atmosphere, NASA was forced to halt construction activities on the ISS and to cast doubt on the continuance of the Hubble Space Telescope program. As a result of the tragedy, ISS construction activities fell years behind schedule (Traci Watson, 2006). Apart from the Columbia tragedy that brought the program to a halt, another issue was the funding requirements of the station. The American space agency discovered that the station's expenses were more than $5 billion over what was originally allotted for the program. As such, NASA was forced to scale down the activities for the station, which some of the partners, namely those in Europe and Japan, balked at (NASA, 2010). More than a decade since its initial launch, the ISS is considered structurally complete and is expected to remain in orbit until 2016, when the station is expected to be decommissioned (Joel Achenbach, 2009).

Mars Rover and Pathfinder Differences and Discovery
   
The main goal of the Mars Pathfinder mission, whose lander was renamed the Sagan Memorial Station (AAAS, 1997), was to demonstrate the workability of low-cost landings on, and examination of, the Martian surface. The goal was deemed met through communication tests between the lander and the rover, communications between the lander and Earth, and testing of the sensors and imaging mechanisms of the rover (National Space Science Data Center, 2005). The Mars Exploration Rover Mission, on the other hand, was designed to land a pair of mobile laboratories, the rovers Spirit and Opportunity, on the Martian surface to conduct fieldwork. These rovers can see and maneuver around small obstacles and move toward target areas chosen by scientists on the basis of the images the rovers send back (Mars Institute, 2009).

The most significant discovery by the MER was made by accident: the wheels of one of the rovers, Spirit, became stuck and continued to spin. This spinning led to the unearthing of soil with a high level of sulfates. These sulfates, usually found near steam vents or hydrothermal bodies of water, suggested that the red planet once held water, which in turn indicated that the planet may once have supported life on its surface (Charles Choi, 2009).

Founders of Astronomy.

If people can now see further, it is because they stand on the shoulders of giants. Isaac Newton wrote this to his scientific rival Robert Hooke, borrowing the metaphor from Bernard of Chartres. Indeed, mankind borrows from one another's ideas and synthesizes them into new, integrated ideas in order to see further. This has been done since the era of the ancient Indians, Arabs, and Chinese, moving westwards to Greece, Eastern Europe, Western Europe, and America. Thus, man leapt forward through the works of Asia, Copernicus, Kepler, Galileo, Newton, Einstein, NASA, and Asia again.

Thousands of years ago, many ancients held a geocentric view, but some people challenged this perspective. While ancient Greeks like Aristotle (4th century BCE) believed that the Earth was the center of the universe, with celestial spheres revolving around it, other Greeks, such as Aristarchus of Samos (3rd century BCE), believed in heliocentrism. But most people rejected his work, favoring Aristotle and Ptolemy (2nd century CE).

Some Indians and Arabs also held the heliocentric view. Aryabhata (5th century CE), an Indian astronomer, wrote the heliocentric Aryabhatiya, which was later translated into Latin for Europeans. Other Indian and Arab astronomers such as Brahmagupta, Varahamihira, Bhaskara II, Abu Rayhan Biruni, and al-Sijzi had similar perspectives. Al-Sijzi (9th century CE) invented an astrolabe based on heliocentrism, but his scientific peers did not take his work seriously (Nasr 135).

But in the 16th century, the heliocentric view gained strength. Copernicus updated the heliocentric perspective using tools from Arab scholars such as Ibn al-Shatir, Nasir al-Din al-Tusi, Moayyeduddin Urdi, Arzachel, Averroes, and Albategni. However, the Church was skeptical of it. Kepler would later reinstate Aryabhata's theory that planetary orbits are elliptical. Galileo would continue the work of Brahmagupta, proposing more precise laws of gravity. He also promoted the heliocentric theory with passion, resulting in conflict with the Church; Galileo spent his later years under house arrest because of this.
In the 17th century, Isaac Newton continued Galileo's work. Protestantism helped the advancement of science because scholars such as Newton were no longer bound by the Catholic Church, and scientific work was now taken seriously. He published the three elementary laws of motion: the law of inertia; the law that relates force, mass, and acceleration; and the law that for every action there is an equal and opposite reaction. These laws help rocket scientists, for example, determine how much force is needed to propel a rocket into space. While in space, the rocket keeps moving without any additional force, but to change its course, a thruster must exert force in the opposite direction. Without knowledge of these laws, probes and rockets in space would become lost or uncontrollable.
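
Newton's second law can be illustrated with a quick back-of-the-envelope sketch in Python. The rocket mass and thrust below are made-up illustrative numbers, not the specifications of any real vehicle; only the relation a = (F - mg) / m is taken from the law itself:

```python
G = 9.81  # standard surface gravity, m/s^2

def net_acceleration(thrust_n: float, mass_kg: float) -> float:
    """Net upward acceleration at liftoff: a = (F_thrust - m*g) / m."""
    return (thrust_n - mass_kg * G) / mass_kg

# Illustrative figures only: a 500,000 kg vehicle must produce more
# thrust than its own weight (m*g ~ 4.9 MN) just to leave the pad.
mass = 500_000.0       # kg, assumed
thrust = 7_000_000.0   # N, assumed
weight = mass * G
a = net_acceleration(thrust, mass)
print(f"weight = {weight:.3g} N, net acceleration = {a:.2f} m/s^2")
```

With these assumed numbers the craft accelerates upward at roughly 4.2 m/s²; with thrust below the weight, the formula correctly gives a negative (downward) net acceleration.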

In the 20th century, Albert Einstein revolutionized astrophysics. Combining the work of his predecessors, he formulated the famous equation E = mc² and produced the General Theory of Relativity, a precise theory of gravity. This theory helped predict anomalies in astrophysics. It is also used regularly today by hikers and motorists in the form of GPS satellite technology; without his theory, GPS readings would become very imprecise. The same is true for probes in space, which could all become lost without it.
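
The GPS claim can be made concrete with a standard textbook estimate. The sketch below uses well-known approximate constants (Earth's gravitational parameter, a GPS orbital radius of about 26,571 km) and the first-order relativistic corrections: the satellite's orbital speed makes its clock run slow (special relativity), while its higher position in Earth's gravity well makes it run fast (general relativity). The net drift comes out near the widely quoted ~38 microseconds per day:

```python
import math

C = 2.998e8            # speed of light, m/s
GM_EARTH = 3.986e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6      # Earth's mean radius, m
R_GPS = 2.6571e7       # GPS orbital radius (~20,200 km altitude), m
SECONDS_PER_DAY = 86_400

# Special relativity: moving clock runs slow by v^2 / (2 c^2).
v = math.sqrt(GM_EARTH / R_GPS)                 # circular orbital speed
sr = -v**2 / (2 * C**2) * SECONDS_PER_DAY       # seconds gained per day (negative)

# General relativity: clock higher in the gravity well runs fast.
gr = GM_EARTH / C**2 * (1 / R_EARTH - 1 / R_GPS) * SECONDS_PER_DAY

net_us = (sr + gr) * 1e6   # net drift, microseconds per day
print(f"SR: {sr*1e6:+.1f} us/day, GR: {gr*1e6:+.1f} us/day, net: {net_us:+.1f} us/day")
```

Left uncorrected, a drift of tens of microseconds per day would translate into kilometers of position error, since light travels about 300 m per microsecond.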

In the later part of the 20th century, space missions became common. In 1957, after the successful Russian launch of the Sputnik satellite, America became alarmed and created NASA the following year. Motivated by fear, America used naturalized German scientists to develop manned and unmanned missions to space. Manned missions include Project Mercury (testing human survivability in space), Project Gemini (preparation for lunar missions), the Apollo program (eventually landing humans on the moon), Skylab (the first US space station), the Apollo-Soyuz Test Project (the first joint US-Soviet mission), the Space Shuttle Program (using re-usable space vehicles), and the International Space Station (an orbiting research facility).

Unmanned NASA programs were also numerous. These include the Mariner program (investigating Mars, Venus, and Mercury), the Pioneer program (whose most notable probes explored the outer planets), the Voyager program (exploring Jupiter, Saturn, and the outer solar system), the Viking program (investigating Mars), the Helios probes (exploring the Sun), the Hubble Space Telescope (looking deeper into space), the Magellan probe (exploring Venus), the Galileo probe (studying Jupiter and its moons), the Mars Global Surveyor, Pathfinder, and Exploration Rovers, and the New Horizons probe, which is expected to explore Pluto and its moons by 2015.

Unfortunately, with the collapse of the US economy, US space missions are on hold while Asian missions accelerate. With its interest in mining Helium-3 from the moon, India launched a lunar probe in 2008 to search for it. China is also sending a manned mission to the moon in 2013 to prepare for possible Helium-3 mining operations. As Newsweek reports, it is expected to supersede all the Apollo missions.

Mariner and Voyager Space Programs.

Any discussion of the Mariner and Voyager space programs must include the history of the United States National Aeronautics and Space Administration (NASA), because NASA managed both programs. The longevity of any space program is directly linked to NASA's history and to the problems NASA encountered while trying to promote and launch its space programs.
   
NASA is an independent, civilian United States government agency, established in 1958 as a direct result of the Soviet Union's launch of its first satellite, Sputnik 1, on October 4, 1957. NASA was created to compete with the Russians on space projects, and is credited with improving international cooperation, exploring the universe and solar system, and explaining how the Earth and the solar system function. Currently, NASA oversees all space science projects, operates the space shuttle, and launches approximately half of all military space missions.

There was great support for NASA during the John F. Kennedy administration in the 1960s, and the government, seeing itself in competition with Russia, accepted the challenge to land a man on the moon. At the time of its creation, NASA was viewed as a necessary part of the government, and the United States Congress funded various programs, acquired many facilities, and hired many employees for the space projects. During the 1960s, the United States became a strong leader in space exploration.
The positive atmosphere changed, however, when times were not so peaceful. As the United States moved into 1970, the unpopular Vietnam War shifted the focus away from space exploration, and the budgets for NASA programs were cut dramatically. NASA's annual budget, which had reached $5 billion in the mid-1960s and stood at almost $4 billion in 1969, was reduced to $3.7 billion in 1970 and just over $3 billion in 1974.

NASA's popularity never fully recovered to the point where the government agreed to fund all the space programs NASA had planned, and NASA faced a continuous struggle to acquire the funds needed to attain its goals. On the one hand, the United States wanted to maintain its position as a worldwide space explorer; on the other, the budget committees would not provide enough funds for NASA to start or maintain the programs it planned. It was an uphill struggle, and space program survival became dependent upon available funds.

NASA restructured and downsized several times, from the large organization it began as to a smaller, more controlled government agency. The United States did not want to forfeit the space program to Russia, although it was never in a position to provide unlimited funds for any future space programs. Ultimately, the progress and successes of the United States space programs suffered due to severe cuts in NASA's requested budgets, year after year. Many space programs were started and never finished because of the cost of repairing problems, and many programs had to be consolidated because the government would not allow unnecessary project spending.

Despite continuous problems, rising costs, and funding cuts by government committees, NASA provided space programs that yielded a rich knowledge of the universe. There is no question that progress would have been further ahead had more support and funding been available to NASA for the space programs. Several space programs survived to produce amazing information. Two of the most successful United States space programs that survived the ongoing budget cuts are the Mariner and Voyager space programs.

Mariner Space Program
   
The Mariner Space Program survived because it fell into the human-interest category: there was popular interest in visiting the planet Mars, and the Mariner project was specifically created to fly by and photograph Mars, as well as investigate Venus and Mercury. NASA's funding was far less than it had requested, leaving NASA with the feeling that Mariner was doomed from the beginning; the $5.25 billion approved by Congress was $195 million less than the agency had requested. Despite the budget cuts, NASA's panel stressed the importance of investigating these nearby planets for possible use in the future, and the project was given priority; however, the government wanted NASA to develop and complete the program in less than half the time.

All Mariner spacecraft were based on a hexagonal or octagonal bus, which housed all of the electronics and to which all components were attached, such as antennae, cameras, propulsion, and power sources.

Mariner 2 (also known as Mariner-Venus 1962), the backup for Mariner 1, was launched on August 27, 1962, shortly after Mariner 1 was lost. In December 1962, it became the first spacecraft to successfully fly by Venus.

Mariner 3 was launched November 5, 1964. Its destination was Mars, but the mission failed when the spacecraft was unable to use its solar panels to charge its batteries. Mariner 3 remains in solar orbit.

Mariner 4 (also known as Mariner-Mars) was launched on November 28, 1964. It was designed to observe Mars and relay information to Earth. On December 21, 1967, communication with Mariner 4 was terminated due to the exhaustion of its gas supply, degraded attitude control, and a weakening signal.

Mariner 5 (also known as Mariner Venus 67) was launched June 14, 1967. Mariner 5 was originally a backup for Mariner 4. The spacecraft's instruments measured interplanetary magnetic fields, charged particles, and plasmas, as well as the radio refractivity and UV emissions of the Venusian atmosphere. The mission was termed a success (Mariner 1962-1975, http://filer.case.edu/sjr16/advanced/20th_far_mariner.html, January 11, 2006).

Mariner 6 (also known as Mariner F, Mariner Mars 69A) was launched February 24, 1969, and Mariner 7 (also known as Mariner G, Mariner Mars 69B) was launched March 27, 1969. The primary purpose of Mariners 6 and 7, identical teammates in a two-spacecraft mission, was to investigate the surface and atmosphere of Mars through close flybys and relay the information back to Earth; however, there was no backup equipment for the information collected, an omission corrected on future models.

Mariner 8 (also known as Mariner-H) was launched May 8, 1971, as part of the Mariner Mars 71 project. The project consisted of two spacecraft (Mariners H and I) intended to perform separate yet complementary missions, entering Mars orbit and relaying data and information back to Earth. Mariner 8 failed during launch, leaving only one of the two spacecraft functional.

Mariner 9 (also known as Mariner-I, Mariner Mars 71) was launched May 30, 1971, and arrived at Mars on November 14, 1971, after a 167-day flight. A 15-minute, 23-second rocket burn put the spacecraft into Mars orbit, making Mariner 9 the first spacecraft to orbit another planet, and it eventually mapped 70 percent of the surface of Mars. Imaging of the surface by Mariner 9 was delayed by a dust storm that started on September 22, 1971, in the Noachis region; the storm quickly grew into one of the largest global storms ever observed on Mars (Mariner 1962-1975).

Mariner 10 (also known as Mariner-J, Mariner Venus/Mercury 73) was launched November 3, 1973. Mariner 10 was the first spacecraft to use advanced computer and solar technology to visit both Venus and Mercury and provide detailed information, and it flew by Mercury three times. Because the heaters for the television cameras failed, the cameras were left on to keep them from reaching low temperatures. Engineers also tested and found a low nitrogen supply, and corrective commands were sent to the spacecraft (Mariner 10, 2005, NASA).

Voyager Space Program
   
The Voyager Space Program began in the spring of 1960, when NASA had money to fund America's long-range plans for space programs. Voyager was an unmanned space mission designed to orbit Venus and Mars and acquire information about the environment, atmosphere, and surfaces of both planets. The program was originally designed to orbit Jupiter and Saturn as well, but budget cuts limited it to Venus and Mars only, which scientists felt were more important.
   
To Congress in 1965, the Voyager space program appeared to be just another space program on NASA's long list. Pressure was on Congress to keep all costs down; however, NASA had already invested thousands of hours in Voyager and did not want to let go.

Further friction developed between the two departments responsible for Voyager's success. NASA headquarters and the Jet Propulsion Laboratory in Pasadena, California, had completely different opinions of how things should be accomplished. The government viewed these departmental differences as evidence of NASA's inability to manage its programs, and Congress was persuaded not to provide funding for Voyager. Ultimately, this inability of the two Voyager teams to agree was a major reason the government was swayed to terminate the program.

One of the problems NASA encountered was that Voyager would need to be much larger than Mariner and would need more advanced equipment to accomplish its tasks over the additional distances, which called for more developed spacecraft designs. This meant NASA would need more money for Voyager than for Mariner, and the need for more money created more problems. Congress resisted the Voyager program, since money had already been provided for other programs. In addition to the higher cost of Voyager, the unpopular Vietnam War was still an issue, limiting additional funds. NASA approached the budget committee several times for money to fund the Voyager space program and was refused funding every time. Eventually, Congress halted all work on Voyager, and the program was abruptly stopped.
   
Many subcontractors were brought into the project to provide designs and products necessary for Voyager, and the high, competitive costs of the subcontractors consumed a large portion of the budget, something NASA had not encountered before. A host of technical questions erupted as well, including whether NASA had the manpower, at that point, to complete the project. All of the technical questions had to be answered before construction could begin. The mission was divided into phases to handle the large scope of the plan. All of these issues and problems were directly related to Voyager's budget problem. NASA quietly continued with Voyager's plans even while Congress postponed the program for years and then suspended it completely.
   
NASA was determined to reach Mars before the Soviet Union and cancelled further work on the Mariner space program in order to use the funds for the Voyager space program, believing a larger vehicle would be more capable of meeting this challenge. Unfortunately, the news media found out what NASA was planning, and the Voyager space program was postponed for another two years. Each time the project was postponed, the estimated costs ran as high as $2 billion, and these extra costs were not included in Congress's budget figures. A major breakthrough occurred in 1967, when President Johnson allocated funds for the Voyager space program and stated that he wanted the program to continue, with launches to Mars in the near future.
   
The Voyager space program underwent a management restructuring, and the existing California office was abolished. The program restarted at a slow pace, focusing on the stated deadline. With the Vietnam War continuing, a budget deficit developed that forced President Johnson to reduce unnecessary expenditures and increase taxes. The Office of Space Science and Applications had asked for $695 million for 1968 (an increase of $88 million over 1967) to provide funds for Voyager ($71.5 million); the House reduced the Voyager budget by $21.5 million. NASA would have to reevaluate its space science activities. In late June, a joint House-Senate conference committee worked out a compromise budget that restored $42 million to Voyager for 1968.

This occurred during a time in the United States marked by riots and other violent civil rights confrontations, not to mention resentment over America's involvement in the Vietnam War. President Johnson was unable to fight the overwhelming will of Congress, and the Voyager space program was cut, along with the Mariner space program. Even though the space programs were cut, NASA continued on a smaller scale. The Voyager space probes are probably two of the most famous space probes in history, for they provided us with our first detailed close-up views and scans of the outer planets.
      
Voyager 1 (also known as Mariner Jupiter/Saturn A) was launched September 5, 1977, after Voyager 2. Voyager 1 performed flybys of Jupiter (March 5, 1979) and Saturn (November 13, 1980) only, taking pictures of both planets. After nearly nine years of dormancy, Voyager 1's cameras were once again turned on to take a series of pictures (http://www.absoluteastronomy.com/topics/Mariner_program, 2010). On February 14, 1990, Voyager 1 looked back from whence it came and took the first family portrait of the solar system, a mosaic of 60 frames of the Sun and six of the planets (Venus, Earth, Jupiter, Saturn, Uranus, and Neptune) as seen from outside the solar system. After this final look back, the cameras on Voyager 1 were once again turned off (Voyagers 1977-present, http://filer.case.edu/sjr16/advanced/20th_far_voyagers.html, September 13, 2006).

The last two probes planned for the Mariner program were designated Mariner 11 and Mariner 12. They were then moved into a separate program named Mariner Jupiter-Saturn, later retitled Voyager, because it was felt that the probes' designs had moved sufficiently far from the Mariner family that they merited a separate name.
   
Both the Mariner and Voyager space programs were created to establish the United States, through NASA, as an innovative force in exploring the universe and the solar system. NASA accomplished its goals, although major budget cuts became great obstacles and made growth very difficult. The Mariner and Voyager space programs are both considered successful, taking knowledge to the limits and beyond what was ever expected. Modern technology and equipment, combined with innovative design, allowed the vehicles of each program to relay information back to Earth, feeding NASA's ever-continuing hunger to know more about outer space and how Earth relates to the whole solar system. Without the Mariner and Voyager space programs, the United States would not be a competing leader in space knowledge today. The United States can look forward to bigger and more ambitious space journeys because of the plateaus reached through the success, persistence, and determination of those involved in the Mariner and Voyager space programs.

Solar Winds.

Physicists believe that the next period of high solar activity is due to start in 2012, and that during this time the Earth will experience some of the worst solar storms in decades (Gaea). The astronomical phenomenon believed to be mainly responsible for this is the solar wind. Solar winds are responsible for the formation of the beautiful auroras, but they can also be destructive in the sense that they can trigger storms that interfere with satellites' power sources, endanger spacewalkers, and even knock out power grids on Earth (Gaea). Considering that the solar cycle is around 11 years, we are in for a tough time in the next 11 years (Gaea). This information may or may not be true, but there is only one implication of this latest concern about solar winds and the possibility of a solar storm in 2012: the subject of solar winds needs to be fully understood before any conclusion can be made. This paper will delve into the nature of solar winds and hopefully provide information substantial enough for one to form an informed opinion on the possible occurrence of a solar storm in 2012.

DEFINITION OF SOLAR WINDS

High Speed and High Temperature. Solar winds are streams of energized, charged particles, primarily electrons and protons in the form of plasma, which flow outward from the Sun through the solar system. Solar winds have speeds as high as 900 km/s and may reach temperatures of 1 million degrees Celsius (Space Environment). It is interesting to note that this extremely high temperature of the solar wind is the reason the Sun's gravity cannot hold on to it, yet no one understands the details of how and where coronal gases, including solar winds, are accelerated to such high velocities (Hathaway, The Solar Wind). Nevertheless, despite the extremely high speed, it takes about 4 to 5 days for solar winds to hit the Earth's atmosphere (Kubesh et al. 10) and cause noticeable changes in the form of colorful lights in the sky, called the northern lights or aurora borealis and the southern lights or aurora australis (Kubesh et al. 10-11).
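
The 4-to-5-day figure is easy to check with simple arithmetic: dividing the mean Sun-Earth distance (one astronomical unit, about 1.496 × 10⁸ km) by the wind speed gives the travel time. A minimal sketch, using the speeds quoted above:

```python
AU_KM = 1.496e8  # mean Sun-Earth distance, km

def travel_days(speed_km_s: float) -> float:
    """Days for solar wind at a constant speed to cover one astronomical unit."""
    return AU_KM / speed_km_s / 86_400  # seconds per day

slow = travel_days(400)  # typical average speed
fast = travel_days(900)  # high-speed streams
print(f"400 km/s: {slow:.1f} days, 900 km/s: {fast:.1f} days")
```

At the typical 400 km/s the trip takes a little over four days, consistent with the cited 4 to 5 days; even the fastest streams need nearly two days to arrive.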

Shaping the Magnetic Field and the Environment. With an average speed of 400 km/s, solar winds are responsible for the anti-sunward tails of comets as well as the shape of the magnetic fields around the planets (Solar Wind), for the solar wind in fact carries a magnetic field (Meyer-Vernet 26). Aside from shaping the magnetic fields of planets, solar winds also shape their environments by blowing a huge bubble of supersonic plasma called the heliosphere, which engulfs the planets and a host of smaller bodies (Meyer-Vernet 1).

Cause of Solar Winds. Solar winds are caused by the hot solar corona, the outermost layer of the solar atmosphere, which expands into space (Space Environment). They consist of around 1 million tons of hydrogen ejected by the Sun per second, blowing rather gently (Meyer-Vernet 1).

Variations in the Speed of Solar Winds. The solar wind is not entirely uniform throughout space. Its speed may reach as high as 800 km/s over coronal holes and as low as 300 km/s over streamers. These high- and low-speed wind streams interact with each other, and such wind speed variations buffet the Earth's magnetic field and are responsible for the storms in the Earth's magnetosphere (Hathaway, The Solar Wind).

FEATURES OF SOLAR WINDS

Solar winds contain a few unique features, such as Magnetic Clouds and Co-rotating Interactive Regions (CRIs). However, more research is needed to ascertain the specific qualities of these features.

Magnetic Clouds. Magnetic Clouds are transient ejections detected in the solar wind, produced whenever eruptions such as flares and coronal mass ejections carry material away from the Sun together with embedded magnetic fields.

Co-rotating Interactive Regions. Co-rotating Interactive Regions, on the other hand, are specific regions in the solar wind where streams of material believed to be moving at different speeds collide and interact with each other (Hathaway, Solar Wind Features).

THE SUN'S CORONA

Definition of Corona. Since solar winds are caused by the hot solar corona, the outermost layer of the solar atmosphere (Space Environment), there is a need to discuss it in some detail. The corona, aside from being the Sun's outer atmosphere, refers to the rays of light, or the pearly white crown, surrounding the Sun that are visible during total eclipses (Hathaway, The Corona).

Temperature of Coronal Gases and Solar Wind. Coronal gases are super-heated to temperatures greater than 1,000,000 degrees Celsius, or 1,800,000 degrees Fahrenheit. It is at these high temperatures that the elements hydrogen and helium, as well as carbon, nitrogen, and oxygen, are completely stripped of their electrons (Hathaway, The Corona), and this ionized hydrogen makes up the solar wind. However, solar winds, in contrast with other coronal gases, are known to have relatively lower temperatures of up to 1,000,000 degrees Fahrenheit, or about 555,600 degrees Celsius (Kubesh et al. 10).
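
The paired Celsius/Fahrenheit figures above can be checked with the standard conversion formulas; both computed values land close to the rounded figures cited:

```python
def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit: F = C * 9/5 + 32."""
    return celsius * 9 / 5 + 32

def f_to_c(fahrenheit: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius: C = (F - 32) * 5/9."""
    return (fahrenheit - 32) * 5 / 9

corona_f = c_to_f(1_000_000)  # coronal gases: ~1,800,000 F
wind_c = f_to_c(1_000_000)    # solar wind: ~555,500 C
print(f"corona: {corona_f:,.0f} F, solar wind: {wind_c:,.0f} C")
```

At these magnitudes the 32-degree offset is negligible, which is why the cited figures are effectively just the ratio of 9/5 applied to the round numbers.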

Coronal Holes. The solar winds produced escape primarily through coronal holes, areas in the Sun's corona that are darker and colder and where the plasma has a lower density than average. These coronal holes are found predominantly near the Sun's poles (The Solar Wind).

INFLUENCE OF THE SOLAR WIND ON EARTH

The influence of solar winds on the solar system is essentially also their influence on Earth.
General Influences. It is scientifically known that the solar wind engulfs the entire solar system, creates magnetospheres, produces auroras, and modulates the penetrating cosmic rays entering the solar system from the galaxy and reaching the Earth (Burch et al. 238).

First Discoveries of Connections between the Earth and the Sun. Over the years, there have been various scientific investigations into possible connections between the Sun and the Earth beyond the fact that sunlight reaches the Earth and causes photosynthesis in plants. In the middle of the nineteenth century, an amateur astronomer by the name of Richard Carrington, while drawing sunspots from a projected image of the Sun, suddenly saw two patches of peculiarly intense light appear and fade within 5 minutes in the largest sunspot group visible (Meyer-Vernet 2). It was later found that Carrington was looking at what we now call a solar flare, an extremely huge explosive release of energy by the Sun. He also noticed that, some time later, the magnetic field at Earth was strongly perturbed and intense auroras spread over much of the world (Meyer-Vernet 3). Although Carrington was not actually the first to suspect that auroras and magnetic effects on Earth are caused by the Sun, his observations strengthened the theory that there is indeed a connection between the Sun and terrestrial magnetic disturbances (Meyer-Vernet 3).

A 19th Century Hypothesis on Solar Winds. One of the first hypotheses about the aforementioned theory was formulated by the 19th century Irish professor of natural and experimental chemistry and physics George Fitzgerald. Fitzgerald hypothesized that matter starting from the Sun can reach Earth only if it is subjected to an acceleration of several times solar gravitation (Meyer-Vernet 3). This hypothesis, now known to be a fact, simply means that for solar matter, which we now call the solar wind, to reach the Earth, it must have velocities that allow it to escape the extremely strong gravitational pull of the Sun. Current scientific evidence shows that solar winds travel at speeds as high as 900 km/s and at temperatures of 1 million degrees Celsius (Space Environment), to which the Sun's gravity cannot hold on (Hathaway, The Solar Wind); it is thus an established scientific fact that solar winds can escape the Sun despite its strong gravity, and can therefore influence the magnetism of Earth and the formation of auroras.
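
The escape-velocity argument can be made quantitative with the standard formula v = sqrt(2GM/r), using well-known approximate values for the Sun's gravitational parameter and radius. A minimal sketch:

```python
import math

GM_SUN = 1.327e20  # Sun's gravitational parameter, m^3/s^2
R_SUN = 6.957e8    # solar radius, m
AU = 1.496e11      # mean Sun-Earth distance, m

def escape_speed_km_s(r_m: float) -> float:
    """Escape speed from the Sun at distance r: v = sqrt(2 GM / r), in km/s."""
    return math.sqrt(2 * GM_SUN / r_m) / 1000

at_surface = escape_speed_km_s(R_SUN)  # ~618 km/s at the solar surface
at_earth = escape_speed_km_s(AU)       # ~42 km/s at Earth's distance
print(f"escape speed: {at_surface:.0f} km/s at the surface, {at_earth:.0f} km/s at 1 AU")
```

The numbers show why the wind's speed matters: 900 km/s streams exceed the escape speed even at the solar surface, and because escape speed falls off with distance, even the slower 300-400 km/s wind is far above the roughly 42 km/s needed by the time it reaches Earth's orbit.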

Possible Effects on Telecommunication. Aside from its influence on the Earth's magnetic field and auroras, the solar wind also affects the ionosphere and telecommunication systems (The Solar Wind). This influence may be negative: a burst of particles from a coronal mass ejection, detected 5 days earlier by SOHO, a spacecraft observing the Sun, may have been the reason behind the destruction of the Telstar 401 communications satellite on January 11, 1997 (The Solar Wind).

Effects on the Weather. Solar winds are also largely responsible for weather conditions in the space around the Earth (The Sun 10). A huge blast of particles from the Sun, called a coronal mass ejection, makes the solar wind stronger and can cause magnetic storms on earth (Kubesh et al. 10), which are a known cause of disturbances in compasses as well as radio signals.

SOURCE OF INFORMATION ON SOLAR WINDS
Little can be learned about the sun and the solar winds through telescopes and theoretical work alone. Much of the raw data known to man about solar winds is the result of efforts by astronomical societies and the scientific community to send into space a number of spacecraft that orbit the Sun and relay information to astronomers about its nature.

Ulysses. One of these spacecraft is Ulysses. Launched in 1990 to explore the poles of the Sun, Ulysses is used by scientists specifically to study the corona and solar wind (The Sun 11). As of 2009, the Ulysses spacecraft had completed one orbit through the solar system, during which it passed over the Sun's south and north poles, and it has provided us with a new view of the solar wind (Solar Winds). Ulysses was able to distinguish fast and slow solar wind, and established that fast wind is steady and simple while slow solar wind is variable and complicated (Burch et al. 238).

Solar winds possess extremely high speed and high temperature, which is why they can escape the Sun's atmosphere, create magnetic field changes on Earth, and help form auroras in the night sky. These are not their only effects, however. They also cause changes in space weather and produce magnetic storms. Moreover, they can interfere with communication facilities such as satellites, and may even destroy them in the process. There is definitely a need for further research, especially on the exact role of solar winds in the solar cycle, which is an extremely complicated process to the layman, and on the exact effect of the solar cycle on Earth. Certain variables such as the destruction of the ozone layer and the distance of the Earth from the Sun should also be considered, along with a more thorough investigation of solar wind features such as Magnetic Clouds and CRIs. For now, one can see that solar winds, with their extremely high temperature and speed, can be destructive.

Impact of Asteroids on Earth.

An asteroid is defined as a body that is smaller than a planet but larger than a meteoroid. An asteroid orbits the sun just like the planets, and most are found in the inner solar system. Asteroids differ from comets in that a comet produces a visible coma while an asteroid doesn't (Lewis 2000). It has been predicted time and time again that an asteroid will collide with the earth, just as happened in the time of the dinosaurs, when an impact wiped out the giant lizards. The last notable impact, the 1908 Tunguska event, flattened almost 2,000 square kilometers of Siberian forest, and that is considered to have been a small object (Lewis 2000).
              
Today the issue has been discussed in the highest forum available, the United Nations General Assembly in New York. Scientists from all over the world have on more than one occasion come together to share ideas and data as to whether the earth is in danger of another catastrophic event such as the one that wiped out the dinosaurs. Such forums also act as fund-raisers for asteroid-monitoring projects on earth. This shows the commitment of the world towards such matters (Lite and Battaghia, 2008).
                 
The impact of an asteroid on earth would be tremendous; the last great impact wiped out almost all life forms on earth. An asteroid with a diameter of more than 3 miles would do enormous damage to our planet. To begin with, it would destroy and flatten everything within 1000 miles; if it landed on any city in the world today, it would completely destroy everything within 1000 miles of ground zero. The explosion would incinerate all life forms within that radius. The impact would raise a cloud of dust so thick that it would blot out the sun, the source of life on earth. The cloud would then spread, and within days the earth would be covered in a thick cloud of dust and perpetual darkness (Morrison 1999).
      
Plant life would be the first to die, as plants are the most dependent on sunlight; gradually all other life forms would follow. Man would suffer from the lack of clean air as the world was covered in a cloud of dust; we would probably have to go around in gas masks to protect our lungs. Eventually, due to the ensuing chaos among other things, man too would die. The dust cloud might take years to settle, and by the time it did, only a few life forms would remain, struggling to survive (Morrison 1999).
          
The effect of the impact therefore depends on the size of the asteroid and the energy it carries at the time of impact. An asteroid about 3 miles in diameter, as in the case above, would release millions of megatons of energy. By comparison, the atomic bombs dropped on Hiroshima and Nagasaki yielded only about 15 and 21 kilotons of energy, respectively. From this one can imagine the amount of destruction that even a small impact can do to our planet. A crater discovered on the Yucatán Peninsula of Mexico stands as proof of the power an asteroid or meteoroid has. The crater is about 180 kilometers in diameter and has been linked with the extinction of the dinosaurs millions of years ago. From that last great impact, it took the earth 30 million years to recover. That is the power of such an event (Lite and Battaghia 2008).
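The "millions of megatons" figure can be reproduced with a simple kinetic-energy estimate. The density and impact speed in the sketch below are typical assumed values for a rocky asteroid, not data from the sources cited here:

```python
import math

MT_TNT_J = 4.184e15   # joules per megaton of TNT

def impact_energy_mt(diameter_km, density=3000.0, speed_km_s=20.0):
    """Kinetic energy of an impactor, in megatons of TNT.

    density (kg/m^3) and speed are assumed typical values for a
    rocky asteroid, not measured properties of any real object.
    """
    radius_m = diameter_km * 1000 / 2
    mass_kg = density * (4 / 3) * math.pi * radius_m**3
    return 0.5 * mass_kg * (speed_km_s * 1000) ** 2 / MT_TNT_J

# A ~3-mile (about 5 km) body, as in the scenario above
energy = impact_energy_mt(5.0)
print(f"{energy:.1e} megatons of TNT")  # on the order of millions of megatons
```

Even with conservative assumptions, the result lands in the millions of megatons, dwarfing any nuclear weapon ever detonated.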
                
How bad the impact will be also depends on the density and composition of the asteroid: is it made up of an alloy of metals, or of rock? The outcome will also depend on whether it splits or breaks up into many pieces due to the friction it meets as it enters the earth's atmosphere. Splitting into many pieces has both good and bad effects. Breaking into smaller pieces significantly reduces the force of any single impact and increases the chances of the earth surviving the event with only minor damage here and there. But the splitting of the asteroid also poses another problem: it means a number of targets all over the earth, and therefore still significant damage. Its breaking up means that the pieces could land on, let's say, a number of cities, and there would still be mass death and extinction (Lewis 2000).
          
In conclusion, an asteroid is a heavenly body that possesses a mighty power that could change the course of the earth. History has recorded the power and catastrophe that such a body possesses. Fortunately, research and history have shown that such events occur only once in a very long time, in some cases once in several millennia. The odds are therefore good that such an event will not occur during our lifetime. Still, advances in this field of astronomy are welcome, for we never know when we may be in danger.

Planetary Greenhouse Effect.

1. Runaway greenhouse effect

Earth, Mars and Venus share certain broad similarities as planets, and yet their general climates are very different from each other. Venus has a surface temperature that can melt many metals, Mars is unbearably cold, while the planet earth enjoys a temperature that is just right for life. What accounts for such stark differences? The answer is simple: it is the presence of an atmosphere, and a particular phenomenon related to it known as the greenhouse effect, that has caused the three terrestrial planets of the solar system to take completely different routes in their evolution.

Planetary bodies that are sufficiently dense and large have gravities strong enough to keep gases that ooze out of their depths from escaping into space, thereby developing a layer of atmosphere. When sunlight in the visible spectrum impinges on a planet, part of it is reflected, but much of the rest is absorbed and re-emitted at longer wavelengths, as infrared radiation. Though the atmosphere may be transparent to visible light, it tends to absorb infrared radiation. Certain gases commonly present in the atmospheres of earth-like planets absorb this outgoing radiation because of their molecular structures (Nash, 2008). This phenomenon of trapping solar energy is called the greenhouse effect, and the so-called greenhouse gases, water vapor, carbon dioxide, methane, etc., are responsible for it. These gases increase the temperature of a planet's atmosphere and make the planet hotter than it would otherwise be.
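The warming the greenhouse effect contributes can be illustrated by computing the temperature a planet would have with no atmosphere at all, using the standard radiative-balance formula. The flux and albedo values in this sketch are rounded textbook figures, assumed for illustration:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(solar_flux, albedo):
    """Blackbody equilibrium temperature (K) of a planet with no greenhouse effect."""
    return (solar_flux * (1 - albedo) / (4 * SIGMA)) ** 0.25

# Rounded solar flux (W/m^2) and Bond albedo, assumed for illustration
t_earth = equilibrium_temp(1361, 0.31)  # ~254 K; observed mean is ~288 K
t_venus = equilibrium_temp(2601, 0.75)  # ~231 K; observed surface is ~735 K

print(f"Earth without greenhouse: {t_earth:.0f} K")
print(f"Venus without greenhouse: {t_venus:.0f} K")
```

The modest 30-odd kelvin gap for Earth against the roughly 500 K gap for Venus captures, in one comparison, the difference between a mild greenhouse effect and a runaway one.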

Sometimes, as in the case of the planet Venus, a positive feedback mechanism develops in the greenhouse effect of the planet, driving temperatures far out of the normal range. This happens when large amounts of gases that were trapped in the surface layer, or that exist in liquid or solid form on the surface of a planet, begin to be slowly released into the atmosphere due to an increase in temperature that could be explained by the normal greenhouse effect or some other cause. The addition of these greenhouse gases to the atmosphere increases the temperature of the planet, which in turn forces the release of more greenhouse gases, which further increases the temperature. The loop continues until all the greenhouse gases on the surface of the planet are released into the atmosphere, by which time the planet could have become a boiling furnace. This kind of runaway greenhouse effect happened on Venus, and it could happen on the earth in the future.

2. Climatic history of Venus and Mars

During the heyday of science fiction in the 1940s and 50s, writers thought that Venus could be an exotic planet covered with lush vegetation. They imagined that in the future people from earth would go to Venus on holiday safaris. These fantasies were dashed, however, when Soviet probes to Venus in the 1960s painted a picture of a living inferno. Venus has a surface temperature of about 460°C (860°F). Its atmospheric pressure is 92 times greater than that of the earth. To explain these extreme temperature and pressure conditions, scientists hypothesized that billions of years ago, Venus and Earth shared similar climatic conditions with an abundance of liquid water; however, Venus was subsequently subjected to a runaway greenhouse effect. To begin with, the basic difference between earth and Venus is that Venus is much closer to the sun. Venus's originally higher temperatures allowed a positive feedback loop to set in.

Billions of years ago, the sun was much cooler than it is now. As the sun's temperature increased, Venus's seas increasingly evaporated. Thus, though the sun's heat stabilized after a time, Venus's temperature went on increasing: the growing amount of water vapor in the atmosphere absorbed more of the re-radiated heat, which raised the temperature of the planet and evaporated still more water. This led to a point where the oceans simply boiled away. Venus also had great reserves of carbon dioxide locked up in its rocks, just as earth has today. As the temperature of the planet kept rising due to the water-vapor feedback loop, the carbon dioxide slowly began to be released from the crust and reinforced the positive feedback loop. The rising temperature of Venus went far beyond 100°C, because even after the oceans had boiled away, the carbon dioxide loop continued for much longer. Today, 96% of Venus's atmosphere is made up of carbon dioxide (as contrasted with about 0.04% CO2 in earth's atmosphere), but there is little water vapor left. This is because when water vapor rises sufficiently high in a hot atmosphere like that of Venus, it is exposed to ultraviolet radiation, which splits H2O into hydrogen and oxygen gases; the hydrogen then escapes into space while the oxygen combines with other gases present in the atmosphere (Strobel, 2007).

The atmosphere of Venus continues to be studied; the space probe Venus Climate Orbiter, due to be launched later this year, is expected to provide us with more clues regarding the planet's atmosphere and climate.
Mars is of course a favorite destination not only for SF authors but also for many scientists and enthusiasts, and not only for holiday purposes but also for permanent living and working. Decades ago scientists thought Mars had an atmosphere and climate very similar to ours, and although such imaginings proved grossly optimistic, people continue to dream about Mars. Mars did not undergo a runaway greenhouse effect like Venus did, nor does it have a moderately warm temperature like Earth has. Mars's surface temperature can go as low as minus 87°C during the winter, although in summer it can be tolerably balmy. Mars has a wide range of temperatures because it does not have a thick atmosphere like earth's, and consequently its temperatures are not moderated. The atmospheric pressure and density of Mars are only about 1% those of the earth.

Mars too could have started out like earth in terms of its atmosphere, climate and some other important planetary features. Earth has a huge magnetic layer around it stretching thousands of kilometers into outer space. This magnetic layer is generated by a rotating mass of molten iron at earth's core. In the case of Mars, such a planetary dynamo somehow ceased to function very early in the planet's history, as a result of which it could not retain its magnetosphere. The stripping of Mars's magnetosphere exposed the upper layers of its atmosphere to the radiation of the solar wind. Moreover, Mars is much smaller than earth, with a surface gravity of about 38% of the earth's. Mars's lesser gravity also did not help it retain its atmosphere.

Ironically, Mars too has about 95% carbon dioxide in its atmosphere, just like Venus. But this carbon dioxide is not effective in generating any significant greenhouse effect because it amounts to very little in actual mass or volume; only a minor greenhouse effect is created (Miles, 2008). Nevertheless, Mars's future could be very much tied up with the greenhouse effect. The people of earth have great designs on Mars. The age of Mars exploration is just about to begin. If we are ever going to colonize the planet on a mass scale, we will have to subject it to a stupendous process called terraforming, which aims at making Mars more earth-like. Terraforming Mars could take at least a hundred years, and it initially involves artificially releasing large amounts of greenhouse gases into the Martian atmosphere so as to increase the temperature of the planet and make its atmosphere thicker, despite the lack of a magnetosphere to protect it (Bonsor, 2010). If a positive feedback loop were to develop during the effort of terraforming Mars by releasing greenhouse gases, it could spare a great amount of effort for the people working on the project.

3. Greenhouse effect on Earth

In itself, the greenhouse effect has been a very beneficial process on earth, without which life probably could not have evolved on this planet. Earth has a remarkably moderate and stable climate that is just suitable for the flourishing of life. Nonetheless, in its billions of years of geological history, the planet has seen considerable fluctuations in its climate, presenting hostile climatic conditions to many of its inhabitants for long stretches of time.

Earth has seen many extinction events in its long history, but the greatest of them was the Permian-Triassic extinction event 251 million years ago, when over 80% of living species perished. One of the major hypotheses advanced to explain this catastrophe is a runaway greenhouse effect caused by increased volcanism and/or an abrupt release of methane hydrates from the ocean floor, methane being a powerful greenhouse gas. Though such extreme climatic events are rare, warming and cooling cycles keep recurring on the planet. For example, there have been many long spells of ice ages in the past, and these deep freezes will continue to occur on the planet in the future.

In the past few decades, the average temperature of the earth has risen by about 1°C. Such fluctuations in earth's temperature have not been uncommon in the historical continuum of human civilization itself. If the current global warming is happening in the natural course of things, we can expect it to stabilize, and even if it does not, there is not much we can do to counteract it. However, since the advent of the Industrial Revolution 200 years ago, humanity has become excessively dependent on fossil fuels; the burning of fossil fuels releases carbon dioxide into the atmosphere, and it is widely believed in our times that the ongoing global warming is anthropogenic, induced by the CO2 pollution caused by human activities. This is the majority view today, although a number of scientists and experts continue to contest the claim (Michaels, 2005). Water vapor is present in our atmosphere in vastly greater quantities than carbon dioxide, and the actual levels of carbon dioxide in the atmosphere have increased by what seems a small percentage in the last 50 years or so.

It would seem logical to conclude that 200 years of heavy pollution could have begun to dramatically affect the global climatic system; at the same time, it could also be that the climatic system is highly resilient, and the earth's atmosphere as a whole may be too vast to be impacted by the side effects of human activities, notwithstanding the obvious and undeniable harm caused by pollution at the local level. Whether caused by humans or not, global warming is inherently a very dangerous phenomenon that is always liable to start a positive feedback loop that can go out of control.

4. Human response to global warming

Much more scientific evidence needs to be gathered, and many more studies conducted, to determine the exact nature and extent of the current global warming. A great problem with the current debate about global warming is that it is only too often influenced by politics, economics and preconceived notions. In this matter, amazingly, even scientists, who are supposed to study and express their findings objectively, have become prone to prejudice. This tendency has to be dramatically curtailed if the public is to be informed about the truth of global warming on a purely scientific basis.

Meanwhile, we cannot wait for scientists to come to a total consensus any time in the near future, because the subject is vast and complicated in itself. If the current global warming proves to be caused by humans, it may be too late to act at a future time. The time to act is now, and one of the most effective ways to deal with this crisis is to replace our fossil fuels as early as possible. Massive amounts of research need to be conducted on the viability of alternative fuels, especially algae biofuels. Whether or not global warming is happening because of human doings, replacing fossil fuels entirely in the next ten years will rid us of pollution and make our cities cooler by annulling the temperature rise caused by the blankets of smog that cling around them.

Origins of Life on Earth.

The question of how life evolved from matter is one of the ultimate questions of science, and it is far from being solved. It is even difficult to clearly answer the question, what is life? Gerald Joyce, a NASA exobiologist, defined life as a self-sustained chemical system capable of undergoing Darwinian evolution (Hazen, 2006). Two fundamental characteristics distinguish life from non-life: metabolism and reproduction. Life grows and sustains itself; metabolism is needed for this. Life replicates itself and evolves; this involves reproduction. Life as we know it is based on carbon, and our bodies consist of complex organic molecules. A class of organic molecules called amino acids are known as the building blocks of life. The amino acids that form proteins are essential for carrying out metabolism in living organisms. However, proteins are produced via the agency of nucleic acids, or genes. Genes are needed for all kinds of duplication and reproduction in living cells. Both proteins and genes are therefore essential for maintaining life.

The long process of evolution on earth traces itself back to single-celled organisms. Evidence suggests that life existed on earth as early as 3.5 billion years ago. More recently, scientists have been able to push the date back to 3.83 billion years ago (Eurekalert.org, 2009). Some kind of rudimentary life forms may have existed hundreds of millions of years even before that, though evidence may never be found for such primitive life forms bordering on inanimate matter. It would seem that somewhere around 4 to 4.4 billion years ago, complex organic molecules came together driven by pure chance, coalesced, and kicked into life. This is the traditional primordial soup theory of the origin of life, an idea that traces back to Darwin's speculations.

There are currently two major theories of the origin of life, both involving the primordial soup with its complex molecules, but in different degrees. The first theory suggests that the complexity of the randomly self-arranging organic molecules became ever greater until it reached a state in which complex metabolic processes could be sustained. The second theory suggests that simpler forms of metabolism started at an intermediate stage of molecular complexity and drove the level of complexity steadily higher (Schirber, 2006).

1a) Genes-first / RNA World theory

The assembly of amino acids in the primordial soup could have taken place under some of the extreme conditions that prevailed on the early earth. A question arises here as to whether the first highly complex molecules of life were proteins or DNA (deoxyribonucleic acid). In living organisms, proteins and DNA are intricately tied up with each other: DNA can replicate itself only by means of proteins, and proteins can be built only on the basis of the blueprint provided by DNA. DNA needs proteins, and proteins need DNA; but RNA (ribonucleic acid) can replicate itself. The possible role of RNA in the origins of life was first suggested by Francis Crick, the co-discoverer of DNA. RNA could conduct metabolism and carry out self-replication on its own. How could RNA that could catalyze its own replication have spontaneously formed from the amalgam of complex organic molecules? And what was the likelihood of its happening? The RNA World hypothesis concerns itself with these questions. It postulates that before life originated on the earth, the planet could have been rich in RNA material, which worked as a substratum from which precellular and cellular life forms evolved. RNA-based catalysis and information storage were the first steps toward the emergence of what we can recognize as life. From the RNA world evolved the more familiar protein-and-DNA world of life's evolution on earth. DNA has greater stability than RNA and has taken over the information-storage function of RNA, while proteins, which are much more efficient and flexible than RNA at catalysis, took over RNA's metabolic function.

1b) Metabolism-first theory

Objecting to the genes-first / RNA world theory, the metabolism-first theory of the origin of life states that a tremendously complex molecule such as RNA could not have emerged out of a random conglomeration of organic molecules; the theory instead proposes the initial emergence of metabolic cycles that could have been carried out even without very complex protein and genetic substances. A conglomeration of molecules much simpler than RNA could have first developed the ability to generate or absorb energy and maintain their organization via metabolism. Gradually the complexity of the metabolic processes could have grown and paved the way for the rise of the RNA world. Instead of molecules coming together, growing in complexity, and generating metabolism once a certain level of complexity was reached, in the metabolism-first approach the metabolic processes themselves kept growing in complexity and formed the basis of the eventual synthesis of nucleic acids.

2a) Evidence for Genes-first theory

It would be very difficult, perhaps impossible, to gather any direct evidence for the origins of life. Fossil records underpin our knowledge of the course of evolution on earth, but the likelihood of gathering such clear indications regarding the mode of the origin of life is very small. We can only favor a theory largely on the convincing power of its logic, and on the ease of its demonstration and replication in a laboratory simulating the conditions of the early earth. The primacy of RNA in the emergence of life on earth can be prima facie supported on the basis of RNA's ability both to store information and to act as a catalytic agent, like an enzyme, in chemical reactions. In theory, parts of the RNA molecule could have been spontaneously synthesized with relative ease in the conditions that prevailed on the primitive earth.

Experiments have corroborated this notion. Short strings of self-replicating RNA molecules have been produced artificially (Johnston et al, 2001). However, these were not generated spontaneously from simple inorganic molecules as were the complex organic molecules in the classic Miller-Urey experiment of the 1950s. It has been shown that smaller strings of catalytic RNA could concatenate by themselves under the right conditions and form into self-replicating RNA. Darwinian-type natural selection could have operated in the realm of RNA that existed on the primitive earth. The self-catalyzing structures which were more efficient at reproducing themselves could have proliferated leading to the eventual emergence of a full-fledged RNA molecule. Also, there are certain identified RNA enzymes which are self-sustaining and self-replicating.    

2b) Evidence for Metabolism-first theory

The support for the metabolism-first theory rests on the fact that spontaneous synthesis of RNA in the laboratory was never effectively demonstrated, and to the extent it was achieved, it was accompanied by serious difficulties. That is to say, even when some kind of spontaneous synthesis was achieved, it involved contrived conditions that were highly unlikely to have existed on the primitive earth. Also, under conditions of spontaneously growing complexity, even if some set of molecules were heading in the right direction, they would be broken up by other molecules long before they could achieve any semblance of RNA. Therefore some scientists felt the need to support the notion that metabolism could have first occurred in molecules much simpler than RNA. Envisaging smaller and simpler molecules interacting with each other in a closed system of chemical reactions within some kind of membrane is easier than envisaging the miraculous appearance of RNA all by itself. Also, we can conceive of these chemical interactions gradually growing in complexity without taxing credulity, say the proponents of the metabolism-first theory. These gradually growing chemical reactions could have produced more complex molecules over time.

Incidentally, metabolism-first models were first suggested in the 1920s, many years before DNA was discovered. In the 1980s and 90s, several different RNA-alternative models were proposed for the occurrence of complex chemical reactions in a metabolic cycle, each with its own degree of plausibility.

An undersea microbe that is also found in a variety of other environments, Methanosarcina acetivorans, can be used to support this theory. This archaeon eats carbon monoxide and ejects methane and acetate. Many of the oldest existing forms of bacteria on earth can convert carbon monoxide into methane, but this microbe can additionally produce acetate. It does this with the help of two common enzymes. In undersea environments where the mineral iron sulfide is present, the expelled acetate reacts with the mineral, forming a sulfur-containing compound known as acetate thioester. The microbe ingests this compound and again breaks it down to acetate, creating ATP (adenosine triphosphate) in the process. Living organisms derive energy from ATP. One of the enzymes in the microbe can even synthesize ATP directly. This is perhaps the simplest self-sustaining metabolic process happening on the earth, requiring just two simple proteins or enzymes. This process was discovered by biologist James Ferry and geochemist Christopher House in 2006, and it very much bolsters the plausibility of the metabolism-first theory (Mason 2006).
3) A theory of personal preference

The objections laid against the RNA world hypothesis by the proponents of the metabolism-first approach are quite valid. The replication of RNA is far too complex a process to have developed spontaneously within the first 500 million years after the earth's formation. The early earth was an exceedingly violent and tumultuous environment bearing little resemblance to the latter-day planet. It was not a warm, exotic haven maintaining the right conditions for complex organic molecules to bounce around and self-organize into forms of ever increasing complexity.

Until recently it was believed that even if protocellular life could form by itself, it could not have withstood the intense meteoric bombardment that the planet was subjected to in those times, especially around 3.9 billion years ago. However, last year it was found in NASA experiments that the bombardment would not have sterilized the earth completely, however intense it might have been. Therefore scientists now suppose that proto-life could have occurred in deep and extreme environments of the planet, far removed from the violent surface, as early as 4.4 billion years ago, soon after the moon formed during a colossal collision of the primitive earth with another planetary body. Simpler metabolic processes must then have developed in complex organic compounds for 600 to 800 million years before the stage of the RNA World was reached.

It is indeed possible for duplication and transmission of characteristics to occur in molecules known as compound genomes or composomes, which are far less complex than RNA or DNA. These genomes could have acted as a kind of metabolic system that gradually grew in complexity and created a pathway for the formation of the first protocell. However, just two months ago NASA scientists demonstrated, after carrying out rigorous analyses, that Darwinian-type evolution could not occur in molecules less complex than RNA and DNA (physorg.com, 2010). This finding controverts a fundamental assumption of metabolism-first proponents, that metabolic cycles could gradually increase in complexity. NASA's finding pushes us back to the RNA world theory, but the degree of implausibility associated with RNA strands emerging spontaneously out of a purely random coalition of molecules in the extremely chaotic environment of the primitive earth remains very high.

Some scientists believe that certain forms of RNA could have occurred on the planet over 4 to 4.4 billion years ago; however, an important point to consider about this scenario is that, except for the existence of liquid water, earth was not very different from other terrestrial planets in the solar system and millions of other planets elsewhere in the universe. If RNA could promptly emerge on the earth all by itself very little time after the planet solidified into existence, it could also have formed on millions of other planets and equally well led to life in all those places. But this does not seem to be the case. Spontaneous formation of RNA is not easy and may need several billions of years of trial and error in stable environments with sufficient energy sources, not just a few tens or hundreds of millions of years in an intensely turbulent environment.

This consideration leaves us with the possibility that RNA molecules, or even primitive cellular organisms, could have evolved on some distant planet over billions of years and could subsequently have drifted in cosmic debris toward our solar system, carrying the seeds of life. If the great interstellar distances pose a problem, we can picture an alternative scenario: since the entire solar system formed from the debris of a previous supernova, primitive life forms could have developed over billions of years of evolution on some planet of the progenitor star and survived its explosion. Primitive life could therefore have existed in frozen form in the very material from which the solar system formed, and later taken root on the earth. On the other earth-like planets of the solar system, Venus and Mars, it could have been sterilized.

Last year, NASA scientists identified an amino acid, glycine, in the material sample obtained from Comet Wild 2 during its 2004 encounter with the Stardust probe. This has corroborated the widely held belief that comets can typically contain complex organic molecules. More such discoveries in the future could fairly substantiate the theory of an extraterrestrial origin of life on earth. The metabolism-first approach seems to have hit a dead end with the pronouncement of NASA's latest verdict on the issue. Future research on the genes-first hypothesis may only confirm how unlikely it is for an RNA molecule to spontaneously evolve under very rough conditions within a relatively very short period of time. But any discovery of primitive life forms anywhere other than the planet earth could revolutionize our conception of life and how it could have originated on earth.

Thursday, November 21, 2013

The Star Furud.

The star Zeta Canis Majoris, notably known as Furud, draws its name from the Arabic Al-Furud, which means "the bright single ones" or "the solitary ones". It is also known as "the Ape", which refers to the surrounding tiny stars (Allen, 1889). This paper will centre on the star Furud, facts about it, and how the star and the constellation in which it resides came by their names.

Furud is a third-magnitude star, sandwiched between the Greater Dog's bottom triangle, the Dove (Columba) and Adhara. Furud lies in close proximity to the Milky Way, which dims it through interstellar dust, so a correction must be made in order to calculate its luminosity. When this is done, and a further correction is made for the ultraviolet light radiated at 27,500 Kelvin, we obtain a luminosity 4,020 times that of the sun. This implies a radius of 4.6 times the sun's. The star has a rotation period of roughly 9 days at most, a consequence of its equatorial rotation speed of at least 25 kilometers per second. Stellar composition theory leads to a mass almost eight times that of the sun, and depicts the star as roughly halfway through its hydrogen-fusing lifecycle of just 32 million years.
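As a quick check on these numbers, the rotation period follows from dividing the star's circumference by its equatorial speed. A minimal sketch, assuming the 4.6-solar-radius and 25 km/s figures above (since 25 km/s is only a projected minimum speed, the result is an upper bound on the period):

```python
import math

R_SUN_KM = 6.957e5          # solar radius in km
SECONDS_PER_DAY = 86400.0

def rotation_period_days(radius_solar, v_eq_km_s):
    """Upper bound on rotation period from radius and projected equatorial speed."""
    circumference_km = 2 * math.pi * radius_solar * R_SUN_KM
    return circumference_km / v_eq_km_s / SECONDS_PER_DAY

# Furud: radius ~4.6 solar, measured v sin i ~25 km/s.
period = rotation_period_days(4.6, 25.0)
print(f"Rotation period <= {period:.1f} days")
```

This comes out at just over nine days; the true period can only be shorter, since the measured speed is a lower limit on the actual equatorial speed.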

There are rumors that the luminosity of Furud varies, although no evidence has been put forward to justify this; perhaps its proximity to the Milky Way has something to do with it. Despite a name suggesting a solitary star, Furud is in fact a spectroscopic binary: a low-mass companion shifts the star back and forth as the two orbit each other over a period of 675 days, approximately one year and ten months. If a guess of two solar masses is adopted for the companion, this gives a mean separation of 3.2 Astronomical Units (about 60 percent of Jupiter's distance from the Sun), with a soaring eccentricity of 0.57 swinging the stars between 5.1 and 1.4 Astronomical Units apart. Furud is close to the mass threshold above which a star explodes as a supernova, but the chances are that it will instead one day form a planetary nebula and leave behind a massive white dwarf. Its proximity to the Milky Way dims it slightly through interstellar dust, by 0.16 magnitudes (Kaler).
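The separation figures can be reproduced from Kepler's third law in solar units (a³ = M·P², with a in AU, P in years and M in total solar masses). A small sketch, assuming the roughly 8 solar masses quoted for Furud plus the guessed 2 for its companion:

```python
def semimajor_axis_au(period_days, total_mass_solar):
    """Kepler's third law in solar units: a^3 = M * P^2 (a in AU, P in years)."""
    p_years = period_days / 365.25
    return (total_mass_solar * p_years ** 2) ** (1.0 / 3.0)

# Assumed masses: ~8 solar for Furud itself plus ~2 for the companion.
a = semimajor_axis_au(675.0, 8.0 + 2.0)
e = 0.57
print(f"mean separation ~{a:.1f} AU, ranging {a*(1-e):.1f} to {a*(1+e):.1f} AU")
```

The result, about 3.2 AU with extremes near 1.4 and 5.1 AU, matches the figures quoted above.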
The constellation within which Furud resides is known as Canis Major. Numerous stories have been told and written about this constellation; some of them are highlighted below.
CANIS MAJOR, the Great Dog. This constellation is average in size; what makes it popular is that it is home to the brightest star in the night sky, Sirius, the Dog Star (Sasaki 32). From ancient times, Canis Major has been considered one of the dogs that the giant Orion took along on his hunting escapades. However, some claim that the dog earned its place in honor of the hound given by Aurora to Cephalus, considered the fleetest of its kind. Legend states that Cephalus raced the dog against a fox that was applauded as the fastest of all animals. They raced for a considerable time and neither emerged the victor; Jupiter was so bowled over by the speed displayed by the hound that he decided to immortalize the dog and placed him among the stars. Another story, however, asserts that it was Icarius's dog. The tale is different among the Scandinavians, who regard the Dog as Sigurd's, while the ancient Indians knew it as the Deer Slayer.

Arabic astronomers called the constellation Al-Kalb al-Akbar, which means "the Greater Dog". In the Euphratean star list, Canis Major is designated the Dog of the Sun, while the early Christians perceived it to be Tobias's Dog or St. David's (Olcott, 95).

According to Speer, Furud was a dancing star who danced with joy, sending love messages across the sky to those below. But unlike the others, Furud was lonely. Despite all the brightness in him, there was a vacuum inside that could never be filled no matter how vigorously he danced. The love Furud showered on others through his dance was never reciprocated, and for this reason he grew duller day by day until he could dance no more. Desperate, he summoned the last brightness left in him, converted it into a spirit kiss and sent it into the air, from where it descended to earth. Once it touched the earth, the kiss became a gentle breeze that touched the lives of people and animals with love. But his love was still never reciprocated, and the sadness and loneliness in him did not cease. Then one day Furud found a beautiful maiden and blew his spirit kiss all over her; she immediately recognized that this spirit was different, and this made Furud overly excited. That is the legend behind how the star Furud attained its name.

Photosphere.

The photosphere is the lowest layer of the atmosphere of the Sun. It emits the light we see with the naked eye when we look at the sun. The photosphere is only about 300 miles (500 kilometers) thick; however, most of the light we see comes from its lowest part, which is about 100 miles (150 kilometers) thick. This part is often referred to as the sun's surface. At the bottom of the photosphere the temperature is 6,400 Kelvin; at the top it is around 4,400 Kelvin. The photosphere consists of numerous granules, which are the tops of granulation cells. A typical granule exists for 15 to 20 minutes. The average density of the photosphere is less than one-millionth of a gram per cubic centimeter. This may seem extremely low, but there are still tens of trillions to hundreds of trillions of individual particles in each cubic centimeter.

Chromosphere

After the photosphere, the next zone is the chromosphere. Its main characteristic is a rise in temperature, which reaches about 10,000 Kelvin in some places and 20,000 Kelvin in others. The chromosphere was first detected during total eclipses of the sun. The spectrum of the chromosphere is visible after the moon covers the photosphere but before it covers the chromosphere; this window lasts only a few seconds. The emission lines in the spectrum seem to flash suddenly into visibility, so the spectrum is known as the flash spectrum. The chromosphere appears to be made up entirely of spike-shaped structures called spicules (SPIHK yoolz). A typical spicule is about 600 miles (1,000 kilometers) across and up to 6,000 miles (10,000 kilometers) high. The density of the chromosphere is about 10 billion to 100 billion particles per cubic centimeter.

Transition region

Between the chromosphere, which reaches about 20,000 Kelvin, and the much hotter corona, at roughly 500,000 Kelvin, lies a region of intermediate temperatures known as the chromosphere-corona transition region, or simply the transition region. The transition region receives much of its energy from the overlying corona, and it emits most of its light in the ultraviolet spectrum. The thickness of the region varies from a few hundred to a few thousand miles or kilometers. In some places, relatively cool spicules extend from the chromosphere high into the solar atmosphere. It has been theorized that there may also be nearby areas where coronal structures reach down close to the photosphere.

Corona

The corona is the hottest part of the sun's atmosphere, with temperatures exceeding 500,000 K. The corona consists of structures such as loops and streams of ionized gas. These structures connect vertically to the solar surface and are molded by magnetic fields that emerge from inside the sun. The temperature of any given structure varies along each field line: closest to the surface it is typical of the temperatures found in the photosphere, while higher up it takes on chromospheric values, then transition-region values, and finally coronal values.

The temperature of the part of the corona nearest to the solar surface  is about 1 million to 6 million Kelvin, and the density is about 100 million to 1 billion particles per cubic centimeter. The temperature reaches tens of millions of Kelvins when a flare occurs.

Sunspots

Sunspots are dark, often roughly circular features on the solar surface. They form where denser bundles of magnetic field lines from the solar interior break through the surface.

The Sun in our solar system has been burning brightly for the past 5 billion years; however, its rate of burning through its nuclear fuel has not been steady. The Sun actually alternates between two phases, a quiet phase and an active phase; the only difference between the two is that during its active phase the sun releases only about 1% more energy than in its quiet phase. An easy way to tell which phase the sun is in is to look at the number of sunspots on its surface: a large increase in sunspots indicates that the sun is in its active phase, while relatively few sunspots indicate a quiet phase.

One effect this has on us is that even though the sun releases only about 1% more energy during its active phase, this is sufficient to cause a marked change in the atmosphere of our planet. A 1% increase in the energy reaching us from the sun warms the ozone in the upper atmosphere. The extra energy also drives the production of more ozone, which traps more heat and creates still more ozone in a feedback cycle. The result is stronger winds, which reduce the amount of cloud cover over the Pacific Ocean, causing it to absorb more energy from the sun. The outcome is warming from the sky and from the sea, increasing the overall temperature of the planet.

Why Pluto is No longer a Planet.

In 2006, the International Astronomical Union (IAU) decided to remove Pluto, which had been considered a planet since its discovery in 1930, from the list of planets and re-classified it as a dwarf planet. This move, as explained by Seeds and Backman (2009), was based on changes the IAU made to the criteria an object must meet in order to be defined as a planet. It was also based on existing similarities between Pluto and other objects in the Kuiper belt (p. 191).
This article is therefore an attempt to analyze the criteria the IAU used to demote Pluto's status. It will also discuss Pluto's current classification as a dwarf planet and attempt to identify and discuss other bodies of the same status.

Why Pluto is No longer a Planet
Before 1930, the solar system was believed to consist of eight planets, with Neptune the outermost. Despite suspicions of the existence of post-Neptunian objects, and a number of close calls, this remained the accepted picture until February 18th, 1930, when 22-year-old Clyde Tombaugh, an American astronomer, discovered what later became known as the planet Pluto (Weintraub, 2007, p. 137).

Controversy over whether Tombaugh's discovery was really a planet began soon after the announcement. Questions started to emerge over the size of the object and its inability to exert significant gravitational influence, especially because by 1935 the estimated size of the planet together with Charon, its largest moon, had been reduced to almost one five-hundredth that of the earth's moon.
Figure 1 below shows Pluto together with its moons Charon, Nix and Hydra. Charon, Pluto's largest moon, together with Pluto weighs less than the earth's moon.

Figure 1 Pluto and its moons

The planet's size had been predicted at its discovery to be 6.6 Earth masses. By the late 1930s, however, estimates ranged between 0.1 and 1 Earth masses. Its smaller size implied that Pluto was unlikely to have produced the gravitational effects on Uranus and Neptune that had led to the prediction of its existence in the first place.

These doubts were further compounded by the discovery of the Kuiper belt in the 1990s. The Kuiper belt, named after astronomer Gerard Kuiper, consists of comet-like debris that orbits around the edge of the solar system (Hamilton, 2002). The objects in the Kuiper belt share numerous similarities with Pluto, further buttressing the argument that Pluto may not really be a planet.

First off, Pluto is composed of icy material like the comets in the Kuiper belt, unlike the other eight planets in the solar system, which are either rocky or gaseous. The eccentricity of Pluto's orbit also raises questions over its planetary status. For twenty years of its 249-year orbit, Pluto is closer to the Sun than Neptune, and during this period it becomes the solar system's eighth planet. This irregular orbit is shared by other objects in the Kuiper belt, which orbit the sun twice for every three orbits Neptune completes. These Kuiper belt objects are called Plutinos.

The Plutinos are caught in resonances with Neptune: as Uranus and Neptune migrated outward after their formation, Neptune's orbital resonances swept up small objects, trapping the Plutinos in the 3:2 resonance while other Kuiper belt objects were caught in other resonances.
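The 3:2 resonance can be verified directly from the two orbital periods; a quick sketch using approximate published values:

```python
# Orbital periods in years (approximate published values).
P_PLUTO = 248.0
P_NEPTUNE = 164.8

ratio = P_PLUTO / P_NEPTUNE
print(f"Pluto/Neptune period ratio = {ratio:.2f} (3/2 = 1.50)")
```

Pluto completes two orbits in almost exactly the time Neptune completes three, which is why the two bodies never approach one another despite Pluto's orbit crossing Neptune's.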
On August 24th, 2006, astronomers meeting in the Czech Republic at the International Astronomical Union (IAU) General Assembly voted that Pluto does not meet the criteria for a fully fledged planet. This followed the discovery in January 2005 of Xena, the largest dwarf planet, which forced the IAU to reconsider its definition of what a planet is and whether or not Pluto fit that definition.

In the discourse regarding whether or not Pluto is a planet, it is imperative to first identify the criteria a body must satisfy in order to be defined as a planet. According to Weintraub (2007), the classical definition was that planets are objects too small to generate energy through nuclear fusion but still large enough to be spherical. The object must also have a primary orbit around a star (p. 185).

While the second criterion is quite clear, the first one requires a more in-depth analysis. The ability to generate energy through nuclear fusion is what distinguishes a planet from a star. A star is made up of hydrogen and helium; through the proton-proton chain, which fuses four hydrogen protons into one helium nucleus, it transforms small amounts of mass into energy. In this process, intense gravitational pressure and temperature cause hydrogen nuclei to fuse into helium, transforming matter into energy. As the density of the gas increases, so does the pressure, heating up the core until nuclear fusion eventually occurs.
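The mass-to-energy bookkeeping of the proton-proton chain can be made concrete with standard atomic masses; a short sketch:

```python
# Atomic masses in unified atomic mass units (u); 1 u = 931.494 MeV/c^2.
M_H1 = 1.007825    # hydrogen-1
M_HE4 = 4.002602   # helium-4
U_TO_MEV = 931.494

mass_deficit = 4 * M_H1 - M_HE4            # mass lost fusing 4 H into 1 He
energy_mev = mass_deficit * U_TO_MEV
fraction = mass_deficit / (4 * M_H1)
print(f"~{energy_mev:.1f} MeV released per helium nucleus ({fraction:.2%} of the mass)")
```

Only about 0.7% of the original hydrogen mass is converted to energy, which is the "small amount of mass" the text refers to; multiplied over a star's bulk, it is enough to power the sun for billions of years.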

Planets on the other hand cannot generate a nuclear fusion reaction and thus do not emit energy. Some planets like Jupiter and Saturn are made up of hydrogen and helium gas but cannot be classified as stars since they are not massive enough to generate the high amount of pressure and temperatures required to trigger a nuclear fusion reaction.

Secondly, this criterion implies that a planet should be large enough that its shape is determined not by its molecular and intermolecular forces but by the force of gravity. It is gravity that shapes the planet into a sphere (p. 186).

Pluto orbits the sun. It is also big enough that its own gravity has made it spherical. While this may qualify it as a planet, a number of arguments, including those mentioned earlier, have emerged to claim otherwise. Astronomers have argued that in order for a body to be defined as a planet, it should not share its orbital space with other objects (p. 109). Pluto is too small, and therefore does not have enough gravity, to clear other objects from its orbit. It instead shares its characteristics with other objects in the solar system that have been classified as dwarf planets (Boyle, 2009).

According to Boyle (2009), after lengthy deliberations and discussions, the IAU made a number of changes to the classical definitions of objects in the solar system that in effect demoted Pluto from being a planet. The first and most important amendment in definition was that of a planet.

According to the IAU, planets are celestial bodies that are in orbit around the sun, have sufficient gravity to assume hydrostatic equilibrium (which makes them spherical), and have cleared the neighborhood around their orbits. Figure 2 below shows the eight planets in order of proximity to the sun. Mercury, Venus, Earth and Mars are the rocky planets, while Jupiter, Saturn, Uranus and Neptune are the gaseous planets, with Neptune taking its place as the last planet of the solar system.
Figure 2 The eight planets in order of proximity to the sun

The IAU defines dwarf planets as celestial bodies, similar to planets, that are found in orbits around the sun. They have sufficient mass for their self-gravity to pull them into a nearly round shape. However, they have not cleared their orbits, and they are not satellites. All other objects apart from satellites are referred to as Small Solar System Bodies (p. 216). This categorizes Pluto as a dwarf planet.
In order to fully comprehend this new categorization, it is important to first see how it came to be. As mentioned earlier, Pluto has been riddled with controversy since its discovery. It was not until 1992, however, when astronomers started discovering other objects in Pluto's neighborhood, that even more serious questions were raised. It was discovered that Pluto is neighbored by numerous icy bodies the size of asteroids; these constitute the Kuiper belt within which Pluto is found.

According to Philip and Philip (2006), Mike Brown discovered in 2005 an object within the Kuiper belt believed to be larger than Pluto. The object, 2003 UB313 or Xena, being larger than Pluto, was also assumed to be a planet. By this reasoning, other objects in the Kuiper belt could also be considered planets.
The IAU did not accept this broad definition of planets and resolved to classify these objects, distinguished by their failure to clear the neighborhood around their orbits, as dwarf planets. Pluto fits into this category because its weak gravity cannot clear out its neighborhood in the Kuiper belt, either by sweeping up or by pushing aside competing objects (Philip & Philip, 2006).
Figure 3 below shows the solar system with the planets and dwarf planets. The figure shows the dwarf planets as significantly smaller than the other planets in the solar system. Eris is seen to be the largest dwarf planet.

Lichtenberg (2007) has noted that Ceres, which is also the largest asteroid, coexists with thousands of other asteroids in the asteroid belt, found between the orbits of Mars and Jupiter. It has a mass of about 1/6,000 that of the earth (p. 253). According to Boyle (2009), Ceres is widely thought to be made up of a rocky core and a mantle of ice, covered by an outer crust of dust and clay.
This composition has led many scientists to believe that Ceres may be an embryonic planet, meaning that its development was halted before it could become a planet (p. 172). Based on its composition, Ceres has long been theorized to harbor water, and therefore possibly life.
Eris, another dwarf planet in the solar system, was discovered in 2005 by the American astronomer Mike Brown. It was formerly known as 2003 UB313 and nicknamed Xena; it was later officially named Eris, after the Greek goddess of conflict. This dwarf planet is larger than Pluto and has twenty-seven percent more mass. It is also three times further from the sun than Pluto (Shipman, Wilson & Todd, 2007, p. 460). When Eris was first discovered, it was declared the solar system's tenth planet due to its size (Esnworth, 2009, p. 99).
According to O'Leary (2009), Eris orbits the sun in a region beyond the Kuiper belt referred to as the scattered disk. Objects within the scattered disk are thought to have originated in the Kuiper belt and to have been ejected by the gravitational influence of Neptune during its outward migration. Eris has one moon, called Dysnomia (p. 66).

Little is known about the other two dwarf planets in the solar system, Makemake and Haumea. Makemake is named after the fertility god and creator deity of the Rapanui people. It was declared a dwarf planet in July 2008. Makemake is a lot like Pluto and has its surface covered with frozen methane.
Haumea, on the other hand, is named after the Hawaiian goddess of fertility and childbirth. This dwarf planet is cigar-shaped, with a four-hour rotation period and a surface made up of water ice (Koupelis, 2010, p. 292). There could be forty or more other dwarf planets in the solar system yet to be discovered.

The demotion of Pluto from planetary status to that of a dwarf planet was controversial and divided the astronomers' community, with some seeking to have the decision overturned. It was also not very well received by the public, largely on sentimental grounds. Most importantly, it has brought forth important questions that present a challenge to the IAU. One such question is whether it is right for planets to be judged based on their size. Pluto and the other dwarf planets have missed the mark of planetary distinction almost solely on this criterion; is this enough?

These questions will likely be answered if and when NASA's New Horizons and Dawn space missions reach Pluto with its moons and Ceres respectively, as planned, in 2015. The success of these missions will give insight into the nature of post-Neptunian objects like Pluto and other objects in the Kuiper belt.

Is interstellar travel possible? What would be the goal?

Interstellar travel is travel between stars, in either a manned or an unmanned spacecraft. There have been many theoretical approaches to the concept. Unmanned travel is a possibility, but manned travel will take time to get off the ground because of the difficulties and the travel times involved. Here we discuss the possibilities and the pros and cons of interstellar travel.

Interstellar travel has been a subject of discussion for decades. Science fiction has always shown us ways of traveling between the stars, but they remain fiction. Warp drives and hyperspace engines have been a fantasy for readers of science fiction literature and cinema, and a fantasy they remain: the practical implementation of such devices is beyond the reach of today's technology and requires far more advanced levels of science and engineering.

Science fiction literature has also imagined generation ships for interstellar travel, in which the crew live and die on board and a later generation reaches the destination. There have been theories stating this to be possible, but at present it is still a distant dream, mainly due to the difficulties of staying in space for long periods and the effects of such journeys on human health.

There have also been proposals for travelling in sleeper ships. These ships would put the travellers into a state of suspended animation, an inert, resting state, while the maneuvering of the ship is done by the computers on board. Once the ship reaches its destination, the passengers are awakened, their age essentially the same as when they started the journey.

All these ideas are still impractical because there is currently no way to put a human into suspended animation; the level of technology required to do so has not been reached. Numerous scientists have written papers on the matter, and most still conclude that manned interstellar travel remains a dream. One possibility for achieving the goal is to increase the speed of the spacecraft.

Increasing spacecraft speed would require advanced propulsion systems. A good amount of research has been done on nuclear engines, but the dangers posed by nuclear radiation are too damaging to contemplate lightly. Even with such engines and other technological advancements, interstellar travel would still take longer than a single human life span.

There have been many probes to the distant planets of the solar system, such as Uranus, and to dwarf planets such as Pluto. These have taken a considerable amount of time, and some spacecraft have stopped functioning before they reached the boundaries of the solar system.

One must then weigh the pros and cons before such a journey is attempted. There is the possibility of an unmanned probe. The nearest star to the solar system is Proxima Centauri, about 4.3 light years away. The fastest outbound probe ever launched is Voyager 1, now moving away from the solar system at about 10.5 miles (16.9 kilometers) per second. This is roughly 1/18,000 of the speed of light.

Voyager 1 was launched about 32 years ago, in September 1977. As of August 28th, 2009, the craft was about 10.312 billion miles from the sun, or 110.94 Astronomical Units. At this speed, the journey to the nearest star, Proxima Centauri, would take about 72,000 years. That spans on the order of a thousand generations and is virtually impossible with the technology we have. The journey could be shortened by the use of nuclear pulse propulsion.
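The 72,000-year figure can be checked by dividing the distance by Voyager 1's speed. A small sketch (the exact result depends on which rounded distance and speed values one assumes, but it lands in the same ballpark):

```python
LIGHT_YEAR_KM = 9.4607e12    # kilometers per light year
SECONDS_PER_YEAR = 3.156e7

def travel_years(distance_ly, speed_km_s):
    """Travel time at constant speed, ignoring any acceleration."""
    return distance_ly * LIGHT_YEAR_KM / speed_km_s / SECONDS_PER_YEAR

# 10.5 miles/sec is about 16.9 km/s
years = travel_years(4.3, 10.5 * 1.609344)
print(f"~{years:,.0f} years to Proxima Centauri")
```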

Nuclear pulse propulsion is a theoretical concept yet to be tested at large scale. It is achieved by setting off a series of nuclear explosions at the rear of the spacecraft that accelerate it to very high speeds, as much as 5.4 × 10^7 km/h, which is 5% of the speed of light. At high rates of acceleration the ship would be subjected to strong G-forces that would be virtually unsustainable by a human crew.

NASA once designed a project called Project Orion, based on nuclear pulse propulsion technology. There are three types of nuclear pulse propulsion systems: thermonuclear pulse propulsion, atomic (fission) pulse propulsion, and matter-antimatter pulse propulsion, the last of which is a completely theoretical concept.

Thermonuclear pulse propulsion could reach speeds of 8-10% of the speed of light, and atomic pulse propulsion 3-5%. The theoretical matter-antimatter drives could achieve 50-80% of the speed of light. The speeds these propulsion systems could impart are very high: a thermonuclear pulse craft could reach Proxima Centauri, at a distance of 4.23 light years, in about 44 years. Here is a table that shows the distances of the nearest stars from our solar system.
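For rough comparison of these drives, cruise time at a constant fraction of the speed of light is just distance divided by speed; a minimal sketch, ignoring acceleration, deceleration and relativistic effects:

```python
def cruise_years(distance_ly, fraction_of_c):
    """Coast time in years at a constant fraction of light speed."""
    return distance_ly / fraction_of_c

# Proxima Centauri at the quoted 4.23 light years, for each drive's speed range.
for frac in (0.03, 0.05, 0.10, 0.50):
    print(f"{frac:.0%} of c -> {cruise_years(4.23, frac):.0f} years")
```

At 10% of light speed the coast time is a little over 42 years, consistent with the 44-year figure quoted once some time for acceleration is allowed.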

The Orion spacecraft was designed so that small nuclear explosives were ejected from the rear of the spacecraft and detonated about 30 m behind it. The explosion energy provided momentum to the spacecraft's pusher plate, which was coated with graphite oil to prevent damage. The pusher plate then transferred the momentum to shock absorbers that transformed the sudden jolt into a gentle push, giving the ship an acceleration of about 1 g.
    
The design was both lauded and criticized. The main problem it faced was nuclear fallout, from the radioactive debris that would fly out of the rear of the spacecraft and spread in all directions. This was the most serious of all its problems. There were many theoretical propositions that claimed to reduce the fallout to negligible levels, but none could be tested anywhere near the vicinity of the earth.

Fig. 1: Orion spacecraft

The project was apparently called off following the signing of the Partial Test Ban Treaty (PTBT). Further projects were conceived, including Project Daedalus and Project Longshot, but these never passed the drawing board and remain on paper; they require significant advances in science and technology.

Project Daedalus required advanced technology, while Project Longshot was designed around present-day technology, using a nuclear fission reactor for propulsion; it was slated to achieve speeds up to 4.5% of the speed of light and to reach Alpha Centauri B in about 100 years. The project has its own issues due to the use of nuclear fuel. If there were a way to minimize the radiation and its effects, or even nullify them, we would have a really great way of utilizing nuclear devices in a constructive manner.

Fig. 2: An artist's conception of the British Interplanetary Society design for Project Daedalus

There have been many debates as to whether interstellar probes are really necessary, especially given their long travel durations (with today's technology) and the significant problems they present. A probe launched with today's technology might well be overtaken by a later probe launched with significantly more advanced technology.

Some argue that a probe that can't reach its destination in 50 years or less should not be launched at all. This is at least partially correct: if a significant technological advance in the future overtakes a present-day probe, the older probe is rendered useless, along with the millions of dollars invested in it. Scientists therefore advocate investing those dollars in the design and implementation of better propulsion systems rather than sending a spacecraft on a mission that will take hundreds or thousands of years to reach its destination.

The discoveries an interstellar probe might make are many. But considering the difficulties of launching and deploying such a spacecraft, investing the money in designing a better spacecraft may make more sense, because there are other space explorations that are more interesting and plausible right now. Frankly, an interstellar probe at this time, with this technology, seems premature. If we can wait a few more years, advances in technology will enable us to build better systems, and perhaps the problems can be solved effectively.

Interstellar travel is certainly possible, but with current technology a manned mission is out of reach, and an unmanned probe would take many years to finish the journey. NASA is conducting important research on spacecraft that use light sails as their propulsion system. This technology is of great importance and has many advantages; it may well be how we propel spacecraft into deep space in the near future.

Geoffrey A. Landis of NASA's Glenn Research Center says that a laser-powered interstellar spacecraft using light sails could possibly be developed within fifty years, using new methods of space travel. "I think that ultimately we're going to do it; it's just a question of when and who," Landis said in an interview. Rockets are too slow to send humans on interstellar missions. Instead, he envisions interstellar craft with gigantic sails, propelled by laser light to about one-tenth the speed of light. It would take such a ship about 43 years to reach Alpha Centauri if it passed straight through the system; slowing down to stop at Alpha Centauri could increase the trip to 100 years.
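Landis's 43-year figure checks out with the same simple arithmetic, again assuming Alpha Centauri at about 4.37 light-years (a standard value, not given in the post):

```python
# Sanity check on the light-sail figures: cruising at one-tenth of c.
distance_ly = 4.37       # assumed distance to Alpha Centauri, light-years
speed_fraction_c = 0.10  # sail cruise speed as a fraction of c

flyby_years = distance_ly / speed_fraction_c
print(f"fly-through: ~{flyby_years:.0f} years")  # prints "fly-through: ~44 years"

# Stopping at the target means spending much of the trip accelerating and
# decelerating at a lower average speed, which is why the quoted estimate
# roughly doubles, to about 100 years.
```

The 43.7-year result matches the "about 43 years" quoted for a fly-through of the system.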

The goal of future interstellar probes must be to seek greater knowledge of the universe and to use that knowledge in a constructive manner. These probes must also try to find planets that can harbor life, just like our Earth, and they must look for any extraterrestrial intelligence. This will help our cause and usher us into a new era of space technology.

Not so long ago, a trip from Australia to America by the sea route could take about 9–13 months. Now it is down to a matter of hours by aircraft. People then could only dream of flying, but the Wright brothers made it a reality. People also ridiculed the idea of sending satellites into Earth orbit, yet now it is commonplace. Science has answered every problem of man as best it could. It is up to us to use science in the greater interest of mankind.

Similarly, interstellar travel can seem a bit ridiculous now, but as they say, history repeats itself; we might just see NASA, or a combined effort of the competing space agencies, launch a manned mission to Alpha Centauri in a few decades. The problems space travel poses are many. There has to be a way for man to endure the unearthly life during the journey and in the alien atmosphere of the destination.

There will also have to be a way to reduce the round-trip delay of information transmission. Suppose we reached Alpha Centauri: information broadcast from there would take more than four years to reach the Earth. Many changes may have to be made in order to reduce that time.
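The delay follows directly from the definition of a light-year: a radio signal travelling at c covers one light-year per year. A minimal sketch, using standard distances for Proxima Centauri (~4.24 ly) and Alpha Centauri A/B (~4.37 ly), neither of which is stated in the post:

```python
# One-way and round-trip light delay from the Alpha Centauri system.
def light_delay_years(distance_ly: float) -> float:
    """A signal travelling at c covers exactly 1 light-year per year."""
    return distance_ly

for name, dist_ly in [("Proxima Centauri", 4.24), ("Alpha Centauri A/B", 4.37)]:
    one_way = light_delay_years(dist_ly)
    print(f"{name}: one-way {one_way:.2f} yr, round trip {2 * one_way:.2f} yr")
```

Either way, a question asked from Earth takes the better part of a decade to get an answer, which is why any probe there must operate autonomously.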

There has also been widespread discussion of near-light-speed travel. Travelling faster than light is theoretically impossible and would violate the laws of physics. Near-light speeds have been achieved in the particle accelerators at CERN, but only for sub-atomic particles such as electrons and positrons, and those accelerators require enormous amounts of energy and investment. Translating that acceleration to larger bodies like spacecraft is impossible for now.

But as science and scientists have repeatedly turned the impossible into the possible, we can hope that this problem will be resolved, that we will be able to travel into space in less time, and that greater payloads can be taken along. It may be a few centuries before we can start to colonize outer space and live there, but as of now it is only possible in dreams and in fiction.

This does not mean we can't send an unmanned probe to the nearest star at this point in time. We have sent probes toward Pluto and the farther regions of space, and Voyager 1 is already in the outermost region of the solar system, scheduled to leave it within a few years. This was achieved in a matter of decades (the craft was launched in 1977).

The use of artificial intelligence is a big plus for interstellar probes. If we achieve a propulsion system that can make the trip in less time, we can use robots to control the equipment and to transfer the data. The problem faced with unmanned travel is the time it takes to transmit information from the destination. Project Longshot had this vision. A report from the project stated that:

Due to the great distance at which the probe will operate, positive control from earth will be impossible due to the great time delays involved. This fact necessitates that the probe be able to think for itself. In order to accomplish this, advances will be required in two related but separate fields, artificial intelligence and computer hardware. AI research is advancing at a tremendous rate. Progress during the last decade has been phenomenal and there is no reason to expect it to slow any time soon. Therefore, it should be possible to design a system with the required intelligence by the time that this mission is expected to be launched.

This report was written back in the late '80s. There has been significant progress in the area of artificial intelligence since then, and we might soon see an advanced version of Project Longshot being launched. It will require a way to minimize the effects of radioactive radiation and nuclear waste. Nuclear fusion could be an answer, but controlled nuclear fusion still hasn't been achieved on such a large scale. However, at the rate technology is advancing, we can proudly say that interstellar travel is just a matter of a few more decades.