Tuesday, 2 September 2014



Inertia
Inertia is the resistance of any physical object to any change in its state of motion, including changes to its speed and direction. It is the tendency of objects to keep moving in a straight line at constant velocity. The principle of inertia is one of the fundamental principles of classical physics that are used to describe the motion of objects and how they are affected by applied forces. Inertia comes from the Latin word iners, meaning idle or sluggish. Inertia is one of the primary manifestations of mass, which is a quantitative property of physical systems. Isaac Newton defined inertia as his first law in his Philosophiæ Naturalis Principia Mathematica, which states:[1]
The vis insita, or innate force of matter, is a power of resisting by which every body, as much as in it lies, endeavours to preserve its present state, whether it be of rest or of moving uniformly forward in a straight line.
In common usage the term "inertia" may refer to an object's "amount of resistance to change in velocity" (which is quantified by its mass), or sometimes to its momentum, depending on the context. The term "inertia" is more properly understood as shorthand for "the principle of inertia" as described by Newton in his First Law of Motion: that an object not subject to any net external force moves at a constant velocity. Thus, an object will continue moving at its current velocity until some force causes its speed or direction to change.
On the surface of the Earth, inertia is often masked by gravity and by the effects of friction and air resistance, both of which tend to decrease the speed of moving objects (commonly to the point of rest). This misled classical theorists such as Aristotle, who believed that objects would move only as long as force was applied to them.



Classical inertia


Galileo Galilei
The law of inertia states that it is the tendency of an object to resist a change in motion. According to Newton, an object will stay at rest or stay in motion (i.e. 'maintain its velocity' in modern terms) unless acted on by a net external force, whether it results from gravity, friction, contact, or some other source. The Aristotelian division of motion into mundane and celestial became increasingly problematic in the face of the conclusions of Nicolaus Copernicus in the 16th century, who argued that the earth (and everything on it) was in fact never "at rest", but was actually in constant motion around the sun.[11] Galileo, in his further development of the Copernican model, recognized these problems with the then-accepted nature of motion and, at least partially as a result, included a restatement of Aristotle's description of motion in a void as a basic physical principle:
A body moving on a level surface will continue in the same direction at a constant speed unless disturbed.[12]
Galileo writes that 'all external impediments removed, a heavy body on a spherical surface concentric with the earth will maintain itself in that state in which it has been; if placed in movement towards the west (for example), it will maintain itself in that movement'.[13] This notion, which is termed 'circular inertia' or 'horizontal circular inertia' by historians of science, is a precursor to, but distinct from, Newton's notion of rectilinear inertia.[14][15] For Galileo, a motion is 'horizontal' if it does not carry the moving body towards or away from the centre of the earth, and for him 'a ship, for instance, having once received some impetus through the tranquil sea, would move continually around our globe without ever stopping'.[16][17]
Galileo later concluded that, based on this initial premise of inertia, it is impossible to tell the difference between a moving object and a stationary one without some outside reference to compare it against.[18] This observation ultimately became the basis for Einstein to develop the theory of Special Relativity.
Concepts of inertia in Galileo's writings would later come to be refined, modified and codified by Isaac Newton as the first of his Laws of Motion (first published in Newton's work, Philosophiae Naturalis Principia Mathematica, in 1687):
Unless acted upon by a net unbalanced force, an object will maintain a constant velocity.
Note that "velocity" in this context is defined as a vector, thus Newton's "constant velocity" implies both constant speed and constant direction (and also includes the case of zero speed, or no motion). Since initial publication, Newton's Laws of Motion (and by extension this first law) have come to form the basis for the branch ofphysics known as classical mechanics.[citation needed]
The actual term "inertia" was first introduced by Johannes Kepler in his Epitome Astronomiae Copernicanae (published in three parts from 1618–1621); however, the meaning of Kepler's term (which he derived from the Latin word for "idleness" or "laziness") was not quite the same as its modern interpretation. Kepler defined inertia only in terms of a resistance to movement, once again based on the presumption that rest was a natural state which did not need explanation. It was not until the later work of Galileo and Newton unified rest and motion in one principle that the term "inertia" could be applied to these concepts as it is today.[citation needed]
Nevertheless, despite defining the concept so elegantly in his laws of motion, even Newton did not actually use the term "inertia" to refer to his First Law. In fact, Newton originally viewed the phenomenon he described in his First Law of Motion as being caused by "innate forces" inherent in matter, which resisted any acceleration. Given this perspective, and borrowing from Kepler, Newton used the term "inertia" to mean "the innate force possessed by an object which resists changes in motion"; thus Newton defined "inertia" to mean the cause of the phenomenon, rather than the phenomenon itself. However, Newton's original idea of an "innate resistive force" was ultimately problematic for a variety of reasons, and most physicists no longer think in these terms. As no alternative mechanism has been readily accepted, and it is now generally thought that there may not be one we can know, the term "inertia" has come to mean simply the phenomenon itself, rather than any inherent mechanism. Thus, in modern classical physics, "inertia" is a name for the same phenomenon described by Newton's First Law of Motion, and the two concepts are now considered equivalent.

Relativity

Albert Einstein's theory of special relativity, as proposed in his 1905 paper, "On the Electrodynamics of Moving Bodies," was built on the understanding of inertia and inertial reference frames developed by Galileo and Newton. While this revolutionary theory did significantly change the meaning of many Newtonian concepts such as mass, energy, and distance, Einstein's concept of inertia remained unchanged from Newton's original meaning (in fact the entire theory was based on Newton's definition of inertia). However, this resulted in a limitation inherent in special relativity: the principle of relativity could only apply to reference frames that were inertial in nature (meaning when no acceleration was present). In an attempt to address this limitation, Einstein proceeded to develop his general theory of relativity ("The Foundation of the General Theory of Relativity," 1916), which ultimately provided a unified theory for both inertial and noninertial (accelerated) reference frames. However, in order to accomplish this, in general relativity Einstein found it necessary to redefine several fundamental concepts (such as gravity) in terms of a new concept of "curvature" of space-time, instead of the more traditional system of forces understood by Newton.[19]
As a result of this redefinition, Einstein also redefined the concept of "inertia" in terms of geodesic deviation, with some subtle but significant additional implications. According to general relativity, when dealing with very large scales, the traditional Newtonian idea of "inertia" does not actually apply, and cannot necessarily be relied upon. Luckily, for sufficiently small regions of spacetime, the special theory can be used, in which inertia still means the same (and works the same) as in the classical model.
Another profound conclusion of the theory of special relativity, perhaps the most well-known, was that energy and mass are not separate things, but are, in fact, interchangeable. This new relationship, however, also carried with it new implications for the concept of inertia. The logical conclusion of special relativity was that if mass exhibits the principle of inertia, then inertia must also apply to energy. This theory, and subsequent experiments confirming some of its conclusions, have served to radically expand the definition of inertia to apply in some contexts to a much wider range of phenomena, including energy as well as matter.
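The quantitative statement of that interchangeability is Einstein's mass–energy relation E = mc². As a small illustration, here is a minimal sketch in Python; the mass value below is arbitrary and chosen only for illustration.

    # Minimal sketch of mass-energy equivalence, E = m * c**2.
    # The mass value is an arbitrary example, not taken from the text.
    c = 299792458.0   # speed of light in metres per second
    m = 0.001         # one gram of matter, expressed in kilograms
    E = m * c ** 2    # energy equivalent in joules
    print(E)          # roughly 9e13 joules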

Interpretations

Mass and inertia

Physics and mathematics appear to be less inclined to use the popular concept of inertia as "a tendency to maintain momentum" and instead favor the mathematically useful definition of inertia as the measure of a body's resistance to changes in velocity or simply a body's inertial mass.
This was already clear at the beginning of the 20th century, before the theory of relativity had been created. Mass, m, denoted something like an amount of substance or quantity of matter, and at the same time mass was the quantitative measure of the inertia of a body.
The mass of a body determines the momentum p of the body at given velocity v; it is a proportionality factor in the formula:
p = mv (1)
The factor m is referred to as inertial mass.
But mass, as related to the 'inertia' of a body, can also be defined by the formula:
F = ma (2)
Here, F is force, m is inertial mass, and a is acceleration.
By this formula, the greater its mass, the less a body accelerates under a given force. The masses m defined by formulas (1) and (2) are equal because formula (2) is a consequence of formula (1) if mass does not depend on time and velocity: F = dp/dt = d(mv)/dt = m dv/dt = ma. Thus, "mass is the quantitative or numerical measure of a body’s inertia, that is of its resistance to being accelerated".
This meaning of a body's inertia therefore is altered from the popular meaning as "a tendency to maintain momentum" to a description of the measure of how difficult it is to change the velocity of a body. But it is consistent with the fact that motion in one reference frame can disappear in another, so it is the change in velocity that is important.
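As a concrete illustration of formulas (1) and (2), here is a minimal sketch in Python; the numerical values are made up, and either formula recovers the same inertial mass.

    # Minimal sketch: the same inertial mass m appears in p = m*v and F = m*a.
    # All numerical values are arbitrary and purely illustrative.
    m = 2.0    # inertial mass in kilograms
    v = 3.0    # velocity in metres per second
    F = 10.0   # applied net force in newtons

    p = m * v  # momentum from formula (1)
    a = F / m  # acceleration from formula (2)

    # Recovering the mass either way gives the same number.
    print(p / v, F / a)   # 2.0 2.0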

Inertial mass

There is no measurable difference between gravitational mass and inertial mass. The gravitational mass is defined by the quantity of gravitational field material a mass possesses, including its energy. The "inertial mass" (relativistic mass) is a function of the acceleration a mass has undergone and its resultant speed. A mass that has been accelerated to speeds close to the speed of light has its "relativistic mass" increased, and that is why the magnetic field strength in particle accelerators must be increased to force the mass's path to curve. In practice, "inertial mass" is normally taken to be "invariant mass" and so is identical to gravitational mass without the energy component.
Gravitational mass is measured by comparing the force of gravity of an unknown mass to the force of gravity of a known mass. This is typically done with some sort of balance. Equal masses will match on a balance because the gravitational field applies to them equally, producing identical weight. This assumption breaks down near supermassive objects such as black holes and neutron stars due to tidal effects. It also breaks down in weightless environments, because the balance will read as balanced no matter what objects are compared.
Inertial mass is found by applying a known net force to an unknown mass, measuring the resulting acceleration, and applying Newton's Second Law, m = F/a. This gives an accurate value for mass, limited only by the accuracy of the measurements. When astronauts need to be measured in the weightlessness of free fall, they actually find their inertial mass in a special chair called a body mass measurement device (BMMD).
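The BMMD applies this idea by mounting the astronaut on a spring of known stiffness and timing the oscillation; since the period of a mass on a spring is T = 2π√(m/k), the inertial mass follows from the measured period. A minimal sketch in Python follows; the spring constant and period below are made-up values, not the actual device's specifications.

    import math

    # Sketch of the spring-oscillation principle behind a body mass
    # measurement device.  Both numbers are assumptions for illustration.
    k = 600.0   # spring constant in newtons per metre (assumed)
    T = 2.2     # measured oscillation period in seconds (assumed)

    # For a mass on a spring, T = 2*pi*sqrt(m/k), so m = k * (T / (2*pi))**2.
    m = k * (T / (2 * math.pi)) ** 2
    print(round(m, 1), "kg")   # about 73.5 kg for these example values
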
At high speeds, and especially near the speed of light, inertial mass can be determined by measuring the magnetic field strength and the curvature of the path of an electrically-charged mass such as an electron.
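The standard relation behind this measurement is that a charge q moving perpendicular to a uniform magnetic field B follows a circular arc of radius r with momentum p = qBr; dividing by the measured speed then gives the inertial (relativistic) mass. A minimal sketch in Python, where the field, radius, and speed are illustrative assumptions chosen so the numbers come out consistently for an electron:

    # Sketch: inertial (relativistic) mass of an electron from the curvature
    # of its path in a magnetic field.  B, r, and v are assumed values.
    q = 1.602e-19   # electron charge in coulombs
    B = 0.0704      # magnetic field strength in tesla (assumed)
    r = 0.05        # measured radius of curvature in metres (assumed)
    v = 2.7e8       # measured speed, about 0.9c, in metres per second (assumed)

    p = q * B * r   # relativistic momentum for circular motion: p = q*B*r
    m_rel = p / v   # since p = m_rel * v
    print(m_rel)    # about 2.1e-30 kg, roughly 2.3 times the electron rest mass
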
No physical difference has been found between gravitational and inertial mass in a given inertial frame. In experimental measurements, the two always agree within the margin of error for the experiment. Einstein used the equality of gravitational and inertial mass as the starting point for his general theory of relativity, in which he postulated that the acceleration of gravity is the result of a 'valley' or slope in the space-time continuum that masses 'fall down', much as pennies spiral around the hole in a common funnel-shaped donation toy. Dennis Sciama later showed that the reaction force produced by the combined gravity of all matter in the universe upon an accelerating object is mathematically equal to the object's inertia [1], but this would only be a workable physical explanation if by some mechanism the gravitational effects operated instantaneously.
At high speeds, relativistic mass always exceeds gravitational mass. If the mass is made to travel close to the speed of light, its "inertial mass" (relativistic) as observed from a stationary frame would be very great while its gravitational mass would remain at its rest value, but the gravitational effect of the extra energy would exactly balance the measured increase in inertial mass.

Inertial frames

In a location such as a steadily moving railway carriage, a dropped ball (as seen by an observer in the carriage) would behave as it would if it were dropped in a stationary carriage. The ball would simply descend vertically. It is possible to ignore the motion of the carriage by defining it as an inertial frame. In a moving but non-accelerating frame, the ball behaves normally because the train and its contents continue to move at a constant velocity. Before being dropped, the ball was traveling with the train at the same speed, and the ball's inertia ensured that it continued to move at the same speed and direction as the train, even while dropping. Note that it is the ball's inertia that ensures this, not its mass.
In an inertial frame all the observers in uniform (non-accelerating) motion will observe the same laws of physics. However, observers in another inertial frame can apply a simple, and intuitively obvious, transformation (the Galilean transformation) to convert their observations. Thus, an observer from outside the moving train could deduce that the dropped ball within the carriage fell vertically downwards.
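A minimal sketch of that transformation in Python (the train speed, drop height, and timing are made-up values): subtracting the train's displacement from the ground-frame coordinates puts the observation in the carriage frame, where the ball falls straight down.

    # Sketch of the Galilean transformation between the ground frame and the
    # frame of a steadily moving carriage.  All numbers are illustrative.
    v_train = 20.0   # carriage velocity in m/s, along x (assumed)
    t = 0.5          # time since the ball was released, in seconds (assumed)
    g = 9.81         # gravitational acceleration, m/s^2
    h = 2.0          # release height above the carriage floor, in metres (assumed)

    # Ball position in the ground frame (released at x = 0):
    x_ground = v_train * t
    y_ground = h - 0.5 * g * t ** 2

    # Galilean transformation into the carriage frame: x' = x - v*t, y' = y
    x_carriage = x_ground - v_train * t
    y_carriage = y_ground
    print(x_carriage, y_carriage)   # x' stays 0.0: the ball drops straight down
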
However, in reference frames which are experiencing acceleration (non-inertial reference frames), objects appear to be affected by fictitious forces. For example, if the railway carriage were accelerating, the ball would not fall vertically within the carriage but would appear to an observer to be deflected, because the carriage and the ball would not be traveling at the same speed while the ball was falling. Other examples of fictitious forces occur in rotating frames such as the earth. For example, a missile at the North Pole could be aimed directly at a location and fired southwards. An observer would see it apparently deflected away from its target by a force (the Coriolis force), but in reality the southerly target has moved because the earth has rotated while the missile is in flight. Because the earth is rotating, a useful inertial frame of reference is defined by the stars, which only move imperceptibly during most observations. The law of inertia is also known as Isaac Newton's first law of motion.
In summary, the principle of inertia is intimately linked with the principles of conservation of energy and conservation of momentum.

Source of inertia; speculative theories

Various efforts by notable physicists such as Ernst Mach (see Mach's principle), Albert Einstein, Dennis William Sciama, and Bernard Haisch have been put towards the study and theorizing of inertia. The popular shorthand remains: "An object at rest tends to stay at rest. An object in motion tends to stay in motion."

Rotational inertia

Another form of inertia is rotational inertia (→ moment of inertia), which refers to the fact that a rotating rigid body maintains its state of uniform rotational motion. Its angular momentum is unchanged unless an external torque is applied; this is also called conservation of angular momentum. Rotational inertia depends on the object remaining structurally intact as a rigid body, and also has practical consequences. For example, a gyroscope uses the property that it resists any change in the axis of rotation.
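A minimal sketch of the conservation of angular momentum named above, in Python; the moments of inertia and angular velocity are made-up values, loosely modelled on a spinning skater pulling in their arms.

    # Sketch: conservation of angular momentum, L = I * omega, when the moment
    # of inertia of a spinning body changes and no external torque acts.
    # All values are illustrative assumptions.
    I1 = 4.0    # initial moment of inertia in kg*m^2 (arms out)
    w1 = 2.0    # initial angular velocity in rad/s
    I2 = 1.6    # reduced moment of inertia (arms pulled in)

    L = I1 * w1   # angular momentum, unchanged without an external torque
    w2 = L / I2   # the spin speeds up as the moment of inertia drops
    print(w2)     # 5.0 rad/s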

Tuesday, 5 August 2014

How does war affect our bodies and minds?

How does war affect our bodies and minds? The creators of Broken Bodies, Suffering Spirits: Injury, Death, and Healing in Civil War Philadelphia, a long-term exhibit at The Mütter Museum of the College of Physicians of Philadelphia, wanted to provide the visitor with rare insight into the experience of losing a limb at this time in history. 

In addition to the display of anatomical specimens (including bones shattered by bullets), medical instruments, personal memorabilia from the war, and vivid eyewitness accounts and images, the exhibit invites the visitor to enter into a Victorian-era black-box illusion, with a 21st-century twist. 

The “Phantom Limb” experience allows visitors to configure their personal characteristics on a touch screen and then enter a curtained room in which they (and they alone) see their arm being shot, infected, and amputated, complete with phantom limb syndrome. Part parlor trick and part interactive technology, the illusion combines a conventional mirror with an embedded video monitor displaying a historically accurate and anatomically correct limb.

12 Technological Advancements of World War I

Erik Sass has been covering the events leading up to World War I exactly 100 years after they happened. But today he's here to discuss some inventions of The Great War.

1. TANKS

In 1914, the “war of movement” expected by most European generals settled down into an unexpected, and seemingly unwinnable, war of trenches. With machine guns reinforcing massed rifle fire from the defending trenches, attackers were mowed down by the thousands before they could even get to the other side of “no-man’s-land.”
A solution presented itself, however, in the form of the automobile, which took the world by storm after 1900. Powered by a small internal combustion engine burning diesel or gas, a heavily-armored vehicle could advance even in the face of overwhelming small arms fire. Add some serious guns and replace the wheels with armored treads to handle rough terrain, and the tank was born.
The first tank, the British Mark I, was designed in 1915 and first saw combat at the Somme in September 1916. The French soon followed suit with the Renault FT, which established the classic tank look (turret on top). Despite their later prowess in tank combat in WWII, the Germans never got around to large-scale tank production in WWI, although they did produce 21 tanks in the unwieldy A7V model.

2. FLAMETHROWERS

Although the Byzantines and Chinese used weapons that hurled flaming material in the medieval period, the first design for a modern flamethrower was submitted to the German Army by Richard Fiedler in 1901, and the devices were tested by the Germans with an experimental detachment in 1911. Their true potential was only realized during trench warfare, however. After a massed assault on enemy lines, it wasn’t uncommon for enemy soldiers to hole up in bunkers and dugouts hollowed into the side of the trenches. Unlike grenades, flamethrowers could “neutralize” (i.e. burn alive) enemy soldiers in these confined spaces without inflicting structural damage (the bunkers might come in handy for the new residents). The flamethrower was first used by German troops near Verdun in February 1915.

3. POISON GAS


Poison gas was used by both sides with devastating results (well, sometimes) during the Great War. The Germans pioneered the large-scale use of chemical weapons with a gas attack on Russian positions on January 31, 1915, during the Battle of Bolimov, but low temperatures froze the poison (xylyl bromide) in the shells. The first successful use of chemical weapons occurred on April 22, 1915, near Ypres, when the Germans sprayed chlorine gas from large cylinders towards trenches held by French colonial troops. The defenders fled, but typically for the First World War, this didn’t yield a decisive result: the Germans were slow to follow up with infantry attacks, the gas dissipated, and the Allied defenses were restored. Before long, of course, the Allies were using poison gas too, and over the course of the war both sides resorted to increasingly insidious compounds to beat gas masks, another new invention; thus the overall result was a huge increase in misery for not much change in the strategic situation (a recurring theme of the war).

4. TRACER BULLETS


While the Great War involved a lot of futile activity, fighting at night was especially unproductive because there was no way to see where you were shooting. Night combat was made somewhat easier by the British invention of tracer bullets—rounds which emitted small amounts of flammable material that left a phosphorescent trail. The first attempt, in 1915, wasn’t actually that useful, as the trail was “erratic” and limited to 100 meters, but the second tracer model developed in 1916, the .303 SPG Mark VIIG, emitted a regular bright green-white trail and was a real hit (get it?). Its popularity was due in part to an unexpected side-benefit: the flammable agent could ignite hydrogen, which made it perfect for “balloon-busting” the German zeppelins then terrorizing England.

5. INTERRUPTER GEAR

Airplanes had been around for just a decade when WWI started, and while they had obvious potential for combat applications as an aerial platform for bombs and machine guns, it wasn’t quite clear how the latter would work, since the propeller blades got in the way. In the first attempt, the U.S. Army basically tied the gun to the plane (pointing towards the ground) with a leather strap, and it was operated by a gunner who sat beside the pilot. This was not ideal for aerial combat and inconvenient because it required two airmen to operate. Another solution was mounting the gun well above the pilot, so the bullets cleared the propeller blades, but this made it hard to aim. After the Swiss engineer Franz Schneider patented his idea for an interrupter gear in 1913, a finished version was presented by Dutch designer Anthony Fokker, whose “synchronizer,” centered on a cam attached to the propeller shaft, allowed a machine gun to fire between the blades of a spinning propeller. The Germans adopted Fokker’s invention in May 1915, and the Allies soon produced their own versions. Schneider later sued Fokker for patent infringement.

6. AIR TRAFFIC CONTROL

In the first days of flight, once a plane left the ground the pilot was pretty much isolated from the terrestrial world, unable to receive any information aside from obvious signals using flags or lamps. This changed thanks to the efforts of the U.S. Army, which installed the first operational two-way radios in planes during the Great War (but prior to U.S. involvement). Development began in 1915 at San Diego, and by 1916 technicians could send a radio telegraph over a distance of 140 miles; radio telegraph messages were also exchanged between planes in flight. Finally, in 1917, for the first time a human voice was transmitted by radio from a plane in flight to an operator on the ground.

7. DEPTH CHARGES

The German U-boat campaign against Allied shipping sank millions of tons of cargo and killed tens of thousands of sailors and civilians, forcing the Allies to figure out a way to combat the submarine menace. The solution was the depth charge, basically an underwater bomb that could be lobbed from the deck of a ship using a catapult or chute. Depth charges were set to go off at a certain depth by a hydrostatic pistol that measured water pressure, ensuring the depth charge wouldn’t damage surface vessels, including the launch ship. After the idea was sketched out in 1913, the first practical depth charge, the Type D, was produced by the Royal Navy’s Torpedo and Mine School in January 1916. The first German U-boat sunk by depth charge was the U-68, destroyed on March 22, 1916.

8. HYDROPHONES

Of course it was a big help if you could actually locate the U-boat using sound waves, which required a microphone that could work underwater, or hydrophone. The first hydrophone was invented in 1914 by Reginald Fessenden, a Canadian inventor who actually started working on the idea as a way to locate icebergs following the Titanic disaster; however, it was of limited use because it couldn’t tell the direction of an underwater object, only the distance. The hydrophone was further improved by the Frenchman Paul Langevin and the Russian Constantin Chilowsky, who invented an ultrasound transducer relying on piezoelectricity, or the electric charge held in certain minerals: a thin layer of quartz held between two metal plates responded to tiny changes in water pressure resulting from sound waves, allowing the user to determine both the distance and direction of an underwater object. The hydrophone claimed its first U-boat victim in April 1916. A later version perfected by the Americans could detect U-boats up to 25 miles away.

9. AIRCRAFT CARRIERS

The first time an airplane was launched from a moving ship was in May 1912, when commander Charles Rumney Samson piloted a Short S.27 pontoon biplane from a ramp on the deck of the HMS Hibernia in Weymouth Bay. However, the Hibernia wasn’t a true aircraft carrier, since planes couldn’t land on its deck; they had to set down on the water and then be retrieved, slowing the whole process considerably. The first real aircraft carrier was the HMS Furious, which began life as a 786-foot-long battle cruiser equipped with two massive 18-inch guns—until British naval designers figured out that these guns were so large they might shake the ship to pieces. Looking for another use for the vessel, they built a long platform capable of both launching and landing airplanes. To make more room for takeoffs and landings, the airplanes were stored in hangars under the runway, as they still are in modern aircraft carriers. Squadron Commander Edward Dunning became the first person to land a plane on a moving ship when he landed a Sopwith Pup on the Furious on August 2, 1917.

10. PILOTLESS DRONES

The first pilotless drone was developed for the U.S. Navy in 1916 and 1917 by two inventors, Elmer Sperry and Peter Hewitt, who originally designed it as an unmanned aerial bomb—essentially a prototype cruise missile. Measuring just 18.5 feet across, with a 12-horsepower motor, the Hewitt-Sperry Automatic Aircraft weighed 175 pounds and was stabilized and directed (“piloted” is too generous) with gyroscopes and a barometer to determine altitude. The first unmanned flight in history occurred on Long Island on March 6, 1918. In the end, the targeting technique—point and fly—was too imprecise for it to be useful against ships during the war. Further development, by attempting to integrate remote radio control, continued for several years after the war, until the Navy lost interest in 1925.

11. MOBILE X-RAY MACHINES

With millions of soldiers suffering grievous, life-threatening injuries, there was obviously a huge need during the Great War for the new wonder weapon of medical diagnostics, the X-ray—but these required very large machines that were both too bulky and too delicate to move. Enter Marie Curie, who set to work creating mobile X-ray stations for the French military immediately after the outbreak of war; by October 1914, she had installed X-ray machines in several cars and small trucks which toured smaller surgical stations at the front. By the end of the war there were 18 of these “radiologic cars” or “Little Curies” in operation. African-American inventor Frederick Jones developed an even smaller portable X-ray machine in 1919 (Jones also invented refrigeration units, air conditioning units, and the self-starting gasoline lawnmower).

12. SANITARY NAPKINS

Women traditionally improvised all kinds of disposable or washable undergarments to deal with their monthly period, all the way back to softened papyrus in ancient Egypt. But the modern sanitary napkin as we know it was made possible by the introduction of new cellulose bandage material during the First World War; it wasn’t long before French nurses figured out that clean, absorbent cellulose bandages were far superior to any predecessors. British and American nurses picked up on the habit, and corporate America wasn’t far behind: In 1920, Kimberly-Clark introduced the first commercial sanitary napkin, Kotex (that’s “cotton” + “texture”). But it was rough going at first, as no publications would carry advertisements for such a product. It wasn’t until 1926 that Montgomery Ward broke the barrier, carrying Kotex napkins in its popular catalogue.

New World





The New World is one of the names used for the Western Hemisphere, specifically the Americas, certain Atlantic and Pacific oceanic islands to which the closest continental shelf is that of the Americas (such as Bermuda), and sometimes Oceania (Australasia). The term originated in the early 16th century after Europeans made landfall in what would later be called "the Americas" in the age of discovery, expanding the geographical horizon of the people of the Middle Ages, who had thought of the world as consisting of Africa, Asia, and Europe only, collectively now referred to as the Old World. The term was first coined by the Florentine explorer Amerigo Vespucci. The Americas were also referred to as the "fourth part of the world".

Saturday, 28 June 2014

Which technology is upcoming
 Gesture recognition systems are creeping onto the market now. Toshiba, a pioneer in this market, has at least one product out that supports an early version of the technology: the Qosmio G55 laptop, which can recognize gestures to control multimedia playback. The company is also experimenting with a TV version of the technology, which would watch for hand signals via a small camera atop the set. Based on my tests, though, the accuracy of these systems still needs a lot of work.
Gesture recognition is a neat way to pause the DVD on your laptop, but it probably remains a long way off from being sophisticated enough for broad adoption. All the same, its successful development would excite tons of interest from the "can't find the remote" crowd. Expect to see gesture recognition technology make some great strides over the next few years, with inroads into mainstream markets by 2012.

New technology war
Core war
32-Core CPUs From Intel and AMD
If your CPU has only a single core, it's officially a dinosaur. In fact, quad-core computing is now commonplace; you can even get laptop computers with four cores today. But we're really just at the beginning of the core wars: Leadership in the CPU market will soon be decided by who has the most cores, not who has the fastest clock speed.
What is it? With the gigahertz race largely abandoned, both AMD and Intel are trying to pack more cores onto a die in order to continue to improve processing power and aid with multitasking operations. Miniaturizing chips further will be key to fitting these cores and other components into a limited space. Intel will roll out 32-nanometer processors (down from today's 45nm chips) in 2009.
When is it coming? Intel has been very good about sticking to its road map. A six-core CPU based on the Itanium design should be out imminently, after which Intel will shift focus to a brand-new architecture called Nehalem, to be marketed as Core i7. Core i7 will feature up to eight cores, with eight-core systems available in 2009 or 2010. (And an eight-core AMD project called Montreal is reportedly on tap for 2009.)
After that, the timeline gets fuzzy. Intel reportedly canceled a 32-core project called Keifer, slated for 2010, possibly because of its complexity (the company won't confirm this, though). That many cores requires a new way of dealing with memory; apparently you can't have 32 brains pulling out of one central pool of RAM. But we still expect cores to proliferate when the kinks are ironed out: 16 cores by 2011 or 2012 is plausible (when transistors are predicted to drop again in size to 22nm), with 32 cores by 2013 or 2014 easily within reach. Intel says "hundreds" of cores may come even farther down the line.