History of Silicon
- Silicon: The Starting Material of Integrated Circuits
Silicon is one of nature's most useful elements and the material most commonly used to manufacture semiconductors. As a pure chemical element, silicon is not found free in nature; it exists primarily in compounds with other elements. In all of its various forms, silicon makes up 25.7% of the earth's crust, making it the second most abundant element there - exceeded only by oxygen. Silicon occurs chiefly in combination with oxygen, either as an oxide or in the family of minerals called silicates (compounds of silicon, oxygen and metals). In oxide form it most commonly occurs as silicon dioxide, SiO2, generally called silica, the main constituent of common sand. Other familiar forms of silicon dioxide are quartzite, quartz, rock crystal, amethyst, agate, flint, jasper, and opal.
- Glass - The First Silicon Revolution
The original silicon revolution was, of course, glass. Man first began to explore its properties a million and a half years ago - that's when our ancient ancestors discovered that obsidian, the almost jet-black glass which is sometimes formed when lava cools rapidly, was useful. Obsidian breaks leaving a very keen edge, so it was good for weapons and tools including, in some ancient cultures, knives used for ritual circumcisions.
But it wasn't until the first civilisations arose in the plains of Mesopotamia that we learned to actually make glass. The recipe is simple, but must be followed carefully. The main ingredient, silicon dioxide (SiO2), or silica, is everywhere: oxygen and silicon together account for roughly three-quarters of the earth's crust by mass. Silica is the basis of most rocks; they seem so different from one another because of the different processes by which they formed and the different crystals silica creates with other compounds.
Having got your silica, usually in the form of sand, you heat it until it melts - at about 1,600°C. You melt in a little soda ash (which lowers the melting point) and a dash of limestone (which makes the glass more stable), then cool the mixture fairly quickly. With any luck - and it has taken 5,000 years to perfect the process - you'll have created an "amorphous solid", which is what glass is.
What that means is that the atoms in glass are locked in place but, instead of forming neat orderly crystals, they are arranged randomly - so glass is rigid, like a solid, but has the disordered arrangement of molecules of a liquid.
Once we'd discovered we could create this incredibly tough but see-through stuff there was no stopping us. Think what life would be like without glass windows, windscreens and bottles - and what would the world's scientists do without the lenses in their microscopes and telescopes?
- Semiconductors - The Second Silicon Revolution
The next silicon revolution was based on a very different form of the element: the computing revolution driven by microprocessors etched into silicon chips. The silicon in these chips starts as silica that has been stripped of its two oxygen atoms and refined into one of the purest materials on the planet - 99.9999999% pure silicon, roughly one foreign atom per billion, the standard level of purity in the microprocessor industry.
So what makes silicon so special? The fact that it is a semiconductor - a substance whose electrical conductivity can be manipulated. Computer chips are in essence tiny assault courses for electrons. The whole of the semiconductor industry is based on deliberately adding impurities to tweak the behaviour of the silicon. These impurities create tiny obstacles for the electrons to negotiate. You can turn the obstacles on and off to steer the electrons, and vast numbers of these obstacles, working together, perform all of the logic functions associated with a computer processor.
The obstacles are known as transistors. The first integrated circuit - as computer chips were originally known - was a relatively simple affair. It was created by an engineer named Jack Kilby at Texas Instruments and demonstrated on 12 September 1958. Kilby's chip was made of germanium, another semiconducting element, but within months a team led by Robert Noyce at a rival company, Fairchild Semiconductor, created a chip based on silicon. The entire modern computing industry can trace its lineage back to this one chip, though modern chips are millions of times more complex.
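To make the idea of on/off obstacles performing logic more concrete, here is a toy sketch in Python. It is purely illustrative - it models a transistor as an idealised switch rather than as the analogue CMOS device it really is - but it shows how a handful of switches compose into a NAND gate, from which every other logic function can be built.

```python
# Toy model: a transistor as an idealised on/off switch.
# Purely illustrative - real transistors are analogue devices,
# and real gates are built from complementary (CMOS) pairs.

def transistor(gate: bool) -> bool:
    """Conducts (True) when the gate signal is high."""
    return gate

def nand(a: bool, b: bool) -> bool:
    """Two transistors in series pulling the output low:
    the output is high unless BOTH of them conduct."""
    return not (transistor(a) and transistor(b))

# NAND is "universal": any other logic function can be built from it.
def not_gate(a: bool) -> bool:
    return nand(a, a)

def and_gate(a: bool, b: bool) -> bool:
    return not_gate(nand(a, b))

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            print(f"NAND({a}, {b}) = {nand(a, b)}")
```

A modern processor is, in essence, billions of such switches wired together.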
Indeed, the miracle of the modern microprocessor is the vast number of transistors the industry has learned to pack on to a tiny wafer of silicon. It's why even tiny devices can have incredible computing power these days.
- Moore's Law
It was Gordon Moore who first realised how quickly the power of computers would multiply. A few years after that first silicon chip was created, he predicted that the number of transistors on a chip would double roughly every two years.
Even he didn't expect what became known as Moore's Law to hold for more than a couple of decades, but it has - thanks, in good part, to the innovations of the company that he and Noyce founded: Intel, the biggest computer chip manufacturer in the world in terms of revenue.
In the company's in-house museum there is a display that graphically illustrates Moore's Law in action. The first chip, produced in 1969, contained 1,200 transistors. By 1972 that had almost doubled to 2,500. It went on doubling and then doubling again - Intel's latest chips have two billion transistors or more packed on to a single tiny chip of silicon - and almost 50 years after he first formulated his law we are still asking how long this incredible miniaturisation can continue.
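Exponential doubling is easy to underestimate, so here is a back-of-the-envelope sketch of the arithmetic. It assumes a perfectly clean doubling every two years - an idealisation that real industry progress only approximates - starting from the 1969 figure quoted above.

```python
# Back-of-the-envelope Moore's Law projection.
# Assumption: transistor counts double exactly every two years -
# an idealisation; real progress has been bumpier.

def projected_transistors(start_count: int, start_year: int, year: int) -> float:
    """Project a transistor count forward under two-year doubling."""
    doublings = (year - start_year) / 2
    return start_count * 2 ** doublings

if __name__ == "__main__":
    for year in (1972, 1980, 2000, 2015):
        count = projected_transistors(1_200, 1969, year)
        print(f"{year}: ~{count:,.0f} transistors")
```

Carried forward to 2015, a clean two-year doubling of that 1,200-transistor chip predicts roughly 10 billion transistors - broadly the order of magnitude of real high-end chips of that era, which is why the law has proved such a durable rule of thumb.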
"I've been in the industry long enough to remember when the experts were saying you cannot make devices smaller than 100 nanometres," says Mark Bohr, the man in charge of working out how Intel can pack even more transistors on to even smaller slices of silicon. "Now we are making devices that are 10 nanometres in size and we don't see an end to it yet."
To give you a sense of scale, a human red blood cell is about 7,000 nanometres across.
But he admits that operating at this nano scale does produce weird and fascinating new challenges. One is a phenomenon called "quantum tunnelling". That happens when the circuits are so small that you can't say with any certainty where an electron is; you can only attach a probability to where it might be.
It means electrons "jump" or "tunnel" across all those carefully created obstacles, and this creates all sorts of problems. It can lead to power draining or "leaking" from chips, it can stop a chip working at all, and it can make chips run very hot.
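The textbook rectangular-barrier formula gives a feel for why shrinking makes this so much worse. The sketch below uses illustrative numbers (a 1 eV barrier and widths of a few nanometres), not real process parameters, but it shows the key point: the tunnelling probability grows exponentially as the barrier gets thinner.

```python
import math

# Rough estimate of quantum tunnelling through a rectangular barrier:
#   T ~ exp(-2 * kappa * d),  kappa = sqrt(2 * m * (V - E)) / hbar
# Illustrative numbers only - not real transistor parameters.

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # one electronvolt, J

def tunnelling_probability(barrier_ev: float, width_nm: float) -> float:
    """Approximate transmission through a barrier of the given height
    (in eV above the electron's energy) and width (in nm)."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay rate, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

if __name__ == "__main__":
    for width in (5.0, 3.0, 2.0, 1.0):
        print(f"{width} nm barrier: T ~ {tunnelling_probability(1.0, width):.1e}")
```

In this toy model, shrinking the barrier from 5 nm to 1 nm raises the leakage probability by almost 18 orders of magnitude - which is why leakage that was once negligible became a first-order design problem.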
In response, the computer industry has had to completely redesign the transistor. New materials have been introduced - including the element hafnium - along with new, more complicated layered structures.
But Bohr acknowledges that tackling these challenges isn't delivering the same increases in processing speed we used to see. That's why, as you may have noticed, desktop computers haven't got much better in the last 10 years or so, but you can now cram the same processing power into a much smaller device, like a smartphone.