Wednesday, April 29, 2009

HISTORY OF COMPUTER GRAPHICS 1970-79

The 1970s saw the introduction of computer graphics in the world of television. Computer Image Corporation (CIC) developed complex hardware and software systems such as ANIMAC, SCANIMATE and CAESAR. All of these systems worked by scanning in existing artwork, then manipulating it, making it squash, stretch, spin and fly around the screen. Bell Telephone and CBS Sports were among the many organizations that made use of the new computer graphics.

While flat shading can make an object look as if it's solid, the sharp edges of the polygons can detract from the realism of the image. One can create smaller polygons (which also means more polygons), but this increases the complexity of the scene, which in turn slows down the performance of the computer rendering it. To solve this, Henri Gouraud in 1971 presented a method for creating the appearance of a curved surface by interpolating the color across the polygons. This method of shading a 3D object has since come to be known as Gouraud shading. One of the most impressive aspects of Gouraud shading is that it takes hardly any more computation than flat shading, yet provides a dramatic increase in rendering quality. One thing Gouraud shading can't fix is the visible silhouette of the object: the flat polygons making up a shape such as a torus are still visible along its outer edges.
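
A minimal sketch of the idea behind Gouraud shading, assuming a simple software rasterizer: a color is computed once at each of a triangle's three vertices, and those colors are blended across every covered pixel using barycentric weights. All names, coordinates and colors here are illustrative.

```python
# Gouraud-style interpolation sketch: per-vertex colors blended across a triangle.
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p within triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def shade_triangle(img, verts, vert_colors):
    """Fill a triangle, Gouraud-style: vertex colors are interpolated per pixel."""
    h, w, _ = img.shape
    a, b, c = [np.asarray(p, dtype=float) for p in verts]
    for y in range(h):
        for x in range(w):
            u, v, s = barycentric(np.array([x + 0.5, y + 0.5]), a, b, c)
            if u >= 0 and v >= 0 and s >= 0:          # pixel lies inside the triangle
                img[y, x] = u * vert_colors[0] + v * vert_colors[1] + s * vert_colors[2]

canvas = np.zeros((64, 64, 3))
shade_triangle(canvas,
               [(5, 5), (60, 20), (20, 60)],                          # screen positions
               [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0])])
```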

One of the most important advancements in computer graphics appeared on the scene in 1971: the microprocessor. Using integrated circuit technology developed in 1959, the electronics of a computer processor were miniaturized down to a single chip, the microprocessor, sometimes called a CPU (Central Processing Unit). One of the first desktop microcomputers designed for personal use was the Altair 8800 from Micro Instrumentation Telemetry Systems (MITS). Sold by mail order in kit form, the Altair (named after a destination in the popular Star Trek series) retailed for around $400. Later personal computers would advance to the point where film-quality computer graphics could be created on them.

In that same year, Nolan Kay Bushnell along with a friend formed Atari. He would go on to create an arcade video game called Pong in 1972 and start an industry that continues even today to be one of the largest users of computer graphics technology.

In the 1970s a number of animation houses were formed. In Culver City, California, Information International Incorporated (better known as Triple I) formed a motion picture computer graphics department. In San Rafael, California, George Lucas formed Lucasfilm. In Los Angeles, Robert Abel & Associates and Digital Effects were formed. In Elmsford, New York, MAGI was formed. In London, England, Systems Simulation Ltd. was formed. Almost none of these companies would still be in business ten years later. At Abel & Associates, Robert Abel hired Richard Edlund to help with computer motion control of cameras. Edlund would later be recruited to Lucasfilm to work on Star Wars, and would eventually establish Boss Film Studios, creating special effects for motion pictures and winning four Academy Awards.

In 1970 Gary Demos was a senior at Caltech when he saw the work of John Whitney Sr., which immediately sparked his interest in computer graphics. That interest deepened when he saw work done at Evans & Sutherland (E&S), along with the animation coming out of the University of Utah (UU). So in 1972 Demos went to work for E&S. At that time they used Digital PDP-11 computers along with the custom-built hardware that E&S was becoming famous for. These systems included the Picture System, which featured a graphics tablet and a color frame buffer (originally designed at the University of Utah).

It was at E&S that Demos met John Whitney Jr., the son of the original graphics pioneer. E&S started to work on some joint projects with Triple I. Founded in 1962, Triple I was in the business of creating digital scanners and other image processing equipment. Between E&S and Triple I there was a Picture Design Group. After working on a few joint projects between the two companies, Demos and Whitney left E&S to join Triple I and form the Motion Picture Products group in late 1974. At Triple I they used PDP-10s and a Foonly machine (a custom PDP-10-class computer). They developed another frame buffer, one that used 1,000 lines; they also built custom film recorders and scanners along with custom graphics processors, image accelerators and the software to run them. This work led to the first use of computer graphics in a motion picture in 1973, when Whitney and Demos worked on the film "Westworld". They used a technique called pixellization, a computerized mosaic created by breaking up a picture into large color blocks: the picture is divided into square areas, and the colors within each area are averaged into a single color.
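
A minimal sketch of that pixellization effect as described, assuming NumPy and Pillow are available; the block size and file names are illustrative.

```python
# Mosaic ("pixellization") sketch: average each square block of an image into one color.
import numpy as np
from PIL import Image

def pixellize(image, block=16):
    """Return a mosaic copy of `image`, averaging every block x block area."""
    pixels = np.asarray(image, dtype=float)
    h, w = pixels.shape[:2]
    out = pixels.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            region = pixels[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = region.mean(axis=(0, 1))   # one color per block
    return Image.fromarray(out.astype(np.uint8))

# Example usage (file name is an assumption):
# pixellize(Image.open("frame.png"), block=24).save("frame_mosaic.png")
```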

In 1973 the Association for Computing Machinery's (ACM) Special Interest Group on Computer Graphics (SIGGRAPH) held its first conference. Devoted solely to computer graphics, the convention attracted about 1,200 people and was held in a small auditorium. Since the 1960s the University of Utah had been the focal point for research on 3D computer graphics and algorithms. For that research, the classes set up various 3D models such as a VW Beetle, a human face, and the most popular, a teapot. It was in 1975 that Martin Newell developed the Utah teapot, and throughout the history of 3D computer graphics it has served as a benchmark; today it is almost an icon for the field. The original teapot that Newell based his computer model on can be seen at the Boston Computer Museum, displayed next to a computer rendering of it.

Ed Catmull received his Ph.D. in computer science in 1974, and his thesis covered texture mapping, the z-buffer and the rendering of curved surfaces. Texture mapping brought computer graphics to a new level of realism. Catmull had come up with the idea while sitting in his car in a parking lot at UU, talking with another student, Lance Williams, about creating a 3D castle. Most objects in real life have very rich and detailed surfaces: the stones of a castle wall, the material on a sofa, the wallpaper on a wall, the wood veneer on a kitchen table. Catmull realized that if an image of such a pattern or texture could be applied to a computer object, it would gain the same visual richness as its real-life counterpart. Texture mapping is the method of taking a flat 2D image of what an object's surface looks like and applying that flat image to a 3D computer-generated object, much in the same way that you would hang wallpaper on a blank wall.
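
A minimal sketch of that lookup, under assumed conventions: each surface point carries (u, v) coordinates into the flat image, and its color is simply read from the texture. The checkerboard "wallpaper" and the coordinates are illustrative.

```python
# Texture mapping sketch: look up a surface point's color in a flat 2D image.
import numpy as np

def sample_texture(texture, u, v):
    """Nearest-neighbour lookup into an (H, W, 3) texture; u and v are in [0, 1]."""
    h, w, _ = texture.shape
    x = min(int(u * (w - 1)), w - 1)
    y = min(int(v * (h - 1)), h - 1)
    return texture[y, x]

# A tiny 2x2 checkerboard "wallpaper" applied at one surface point:
checker = np.array([[[255, 255, 255], [0, 0, 0]],
                    [[0, 0, 0], [255, 255, 255]]], dtype=np.uint8)
print(sample_texture(checker, 0.1, 0.9))   # color near the bottom-left of the texture
```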

The z-buffer aided the process of hidden surface removal by using "zels", which are similar to pixels except that instead of recording the luminance of a specific point in an image, they record the depth of that point. The letter "z" refers to depth, just as "y" refers to vertical position and "x" to horizontal position. The z-buffer was thus an area of memory devoted to holding the depth data for every pixel in an image. Today high-performance graphics workstations have a z-buffer built in.
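
A minimal sketch of the z-buffer test, with assumed resolution and colors: every pixel keeps both a color and a depth, and a new surface only overwrites a pixel when it lies closer to the viewer than whatever was drawn there before.

```python
# Z-buffer sketch: a depth value per pixel decides which surface wins.
import numpy as np

WIDTH, HEIGHT = 320, 240
color_buffer = np.zeros((HEIGHT, WIDTH, 3))            # the final image
z_buffer = np.full((HEIGHT, WIDTH), np.inf)            # depth stored for each pixel

def plot(x, y, depth, color):
    """Write a pixel only if it is nearer to the viewer than the one already there."""
    if depth < z_buffer[y, x]:
        z_buffer[y, x] = depth
        color_buffer[y, x] = color

plot(10, 10, depth=5.0, color=(1, 0, 0))   # red surface at depth 5
plot(10, 10, depth=9.0, color=(0, 0, 1))   # blue surface behind it: ignored
plot(10, 10, depth=2.0, color=(0, 1, 0))   # green surface in front: wins
```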

While Gouraud shading was a great improvement over flat shading, it still had a few problems with realism. Looking closely at a Gouraud-shaded torus, you would notice slight variations in the shading that reveal the underlying polygons. These variations can also cause reflections to appear incorrectly, or even disappear altogether in certain circumstances. This was corrected by Bui Tuong Phong, a programmer at UU (of course). Phong arrived at UU in 1971 and in 1974 he developed a new shading method that came to be known as Phong shading. After UU, Phong went on to Stanford as a professor, and early in 1975 he died of cancer. His method interpolates the surface normals (rather than the colors) across a polygonal surface and evaluates the lighting at every pixel, giving accurate reflective highlights and shading. The drawback is that Phong shading can be up to 100 times slower than Gouraud shading. Because of this, even today, when animators are creating small, flat 3D objects that are not central to the animation, they will use Gouraud shading on them instead of Phong. As with Gouraud shading, Phong shading cannot smooth over the outer edges of 3D objects.
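
A minimal sketch of per-pixel (Phong-style) shading under assumed values: the normals stored at a polygon's vertices are interpolated for each pixel, re-normalized, and a simple reflection model is evaluated there, which is what keeps specular highlights in the right place. Gouraud shading would instead evaluate the lighting only at the vertices and interpolate the resulting colors.

```python
# Phong-style per-pixel shading sketch: interpolate normals, then light each pixel.
import numpy as np

def normalize(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def phong_pixel(normal, light_dir, view_dir,
                diffuse=np.array([0.8, 0.2, 0.2]), specular=0.5, shininess=32):
    """Evaluate a simple Phong reflection model for one pixel."""
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    r = 2 * np.dot(n, l) * n - l                      # mirror reflection of the light
    diff = max(np.dot(n, l), 0.0) * diffuse
    spec = specular * max(np.dot(r, v), 0.0) ** shininess
    return diff + spec

# Interpolate vertex normals for a pixel at barycentric weights (0.2, 0.3, 0.5):
n0, n1, n2 = np.array([0, 0, 1.0]), np.array([0.3, 0, 0.95]), np.array([0, 0.3, 0.95])
pixel_normal = 0.2 * n0 + 0.3 * n1 + 0.5 * n2
print(phong_pixel(pixel_normal, light_dir=[1, 1, 1], view_dir=[0, 0, 1]))
```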

A major breakthrough in simulating realism began in 1975 when the French mathematician Benoit Mandelbrot published a paper called "A Theory of Fractal Sets." After some 20 years of research he published his findings and named the field fractal geometry. To understand what a fractal is, consider that a straight line is a one-dimensional object, while a plane is a two-dimensional object. However, if the line curves around in such a way as to cover the entire surface of the plane, it is no longer quite one-dimensional, yet not quite two-dimensional either. Mandelbrot described it as having a fractional dimension, somewhere between one and two.

To understand how this helps computer graphics, imagine creating a random mountain terrain. You might start with a flat plane and tell the computer to divide it into four equal parts, then offset the new center point vertically by some random amount. Next, one of the new smaller squares is chosen, subdivided, and its center offset by a smaller random amount. The process continues recursively until some limit is reached and all the squares have been offset, as sketched below.
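
A minimal sketch of that recursive midpoint-displacement idea, shown in one dimension for brevity (a ridge-line profile rather than a full grid of squares); the roughness value and recursion depth are illustrative assumptions. Applying the same subdivide-and-offset rule to a 2D grid yields fractal mountains.

```python
# Midpoint-displacement sketch: subdivide repeatedly, offsetting each new midpoint.
import random

def midpoint_displace(heights, roughness=0.5, depth=6):
    """Recursively subdivide a height profile, offsetting each new midpoint randomly."""
    if depth == 0:
        return heights
    scale = roughness ** (7 - depth)          # offsets shrink at finer levels
    result = []
    for a, b in zip(heights, heights[1:]):
        mid = (a + b) / 2 + random.uniform(-1, 1) * scale
        result.extend([a, mid])
    result.append(heights[-1])
    return midpoint_displace(result, roughness, depth - 1)

profile = midpoint_displace([0.0, 0.0])       # a flat line becomes a jagged ridge line
print(len(profile), min(profile), max(profile))
```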

Mandelbrot followed up his paper with a book entitled "The Fractal Geometry of Nature." This showed how his fractal principles could be applied to computer imagery to create realistic simulations of natural phenomena such as mountains, coastlines, wood grain, etc.

After graduating in 1974 from UU, Ed Catmull went to a company called Applicon. He didn't stay very long, because in November of that same year he was made an offer he couldn't refuse. Alexander Schure, founder of the New York Institute of Technology (NYIT), had gone to UU to see their computer graphics lab. Schure had a great interest in animation and had already established a traditional animation facility at NYIT. After seeing the setup at UU, he asked Evans what equipment he needed to create computer graphics, then told his people to "get me one of everything they have." The timing happened to be just right, because UU was running out of funding at the time. Schure made Ed Catmull director of NYIT's new Computer Graphics Lab. Other talented people in the computer graphics field, such as Malcolm Blanchard, Garland Stern and Lance Williams, then left UU and went to NYIT. Thus the leading center for computer graphics research soon shifted from UU to NYIT.

One talented recruit was Alvy Ray Smith. As a young student at New Mexico State University in 1964, he had used a computer to create a picture of an equiangular spiral for a Nimbus Weather satellite. Despite this early success, Smith didn't take an immediate interest in computer graphics. He moved on to Stanford University, got his Ph.D., then promptly took his first teaching job at New York University. Smith recalls, "My chairman, Herb Freeman, was very interested in computer graphics, some of his students had made important advances in the field. He knew I was an artist and yet he couldn't spark any interest on my part, I would tell him 'If you ever get color I'll get interested.' Then one day I met Dr. Richard Shoup, and he told me about Xerox PARC (Palo Alto Research Center). He was planning on going to PARC to create a program that emulated painting on a computer the way an artist would naturally paint on a canvas."

Shoup had become interested in computer graphics while he was at Carnegie Mellon University. He then became a resident scientist at PARC and began working on a program he called "SuperPaint." It used one of the first color frame buffers ever built. At the same time Ken Knowlton at Bell Labs was creating his own paint program.

Smith, on the other hand, wasn't thinking much about paint programs. In the meantime, he had broken his leg in a skiing accident and re-thought the path his life was taking. He decided to move back to California to teach at Berkeley in 1973. "I was basically a hippie, but one day I decided to visit my old friend, Shoup in Palo Alto. He wanted to show me his progress on the painting program, and I told him that I only had about an hour, and then I would need to get back to Berkeley. I was only visiting him as a friend, and yet when I saw what he had done with his paint program, I wound up staying for 12 hours! I knew from that moment on that computer graphics was what I wanted to do with my life." Smith managed to get himself hired by Xerox in 1974 and worked with Shoup in writing SuperPaint.

A few years later, in 1975, in nearby San Jose, Allen Baum, a workmate of Steve Wozniak at Hewlett-Packard, invited Wozniak to a meeting of the local Homebrew Computer Club. Homebrew, started by Fred Moore and Gordon French, was a club of amateur computer enthusiasts, and it soon became a hotbed of ideas about building your own personal computer. From the Altair 8800 to TV typewriters, the club discussed and built virtually anything that resembled a computer. It was a friend at the Homebrew club who first gave Wozniak a box full of electronic parts, and it wasn't long before Wozniak was showing off his own personal computer/toy at the Homebrew meetings. A close friend of Wozniak, Steve Jobs, worked at Atari and helped Wozniak develop his computer into the very first Apple computer. They built the units in a garage and sold them for $666.66.

In the same year William Gates III, at the age of 19, dropped out of Harvard and, along with his friend Paul Allen, founded a company called Microsoft. They wrote a version of the BASIC programming language for the Altair 8800 and put it on the market. Some five years later, in 1980, when IBM was looking for an operating system to use with their new personal computer, they approached Microsoft, and Gates remembered an operating system for Intel 8086 microprocessors written by Seattle Computer Products (SCP) called 86-DOS. Taking a gamble, Gates bought 86-DOS from SCP for $50,000, reworked it into DOS and licensed it (smartly retaining ownership) to IBM as the operating system for their first personal computer. Today Microsoft dominates the personal computer software industry with gross annual sales of almost 4 billion dollars, and now it has moved into the field of 3D computer graphics.

Meanwhile back at PARC, Xerox had decided to focus solely on black and white computer graphics, dropping everything that was in color. So Alvy Ray Smith called Ed Catmull at NYIT and went out east with David DiFrancesco to meet with Catmull. Everyone hit it off, so Smith made the move from Xerox over to NYIT; this was about two months after Catmull had gotten there. The first thing Smith did was write a full color (24-bit) paint program, the first of its kind.

Later others joined NYIT's computer graphics lab, including Tom Duff, Paul Heckbert, Pat Hanrahan, Dick Lundin, Ned Greene, Jim Blinn, Rebecca Allen, Bill Maher, Jim Clark, Thaddeus Beier, Malcolm Blanchard and many more. In all, the computer graphics lab at NYIT would eventually be home to more than 60 employees, and many of these individuals would still be leading the field of computer graphics some twenty years later. The first application NYIT focused on was 2D animation and creating tools to assist traditional animators. One of the tools Catmull built was "Tween," which interpolated the in-between frames from one line drawing to another. They also developed a scan-and-paint system for scanning and then painting pencil-drawn artwork. This would later evolve into Disney's CAPS (Computer Animation Production System).

Next the NYIT group branched into 3D computer graphics. Lance Williams wrote a story for a movie called "The Works," sold the idea to Schure, and this movie became NYIT's major project for over two years. A lot of time and resources were spent in creating 3D models and rendering test animations. "NYIT in itself was a significant event in the history of computer graphics" explains Alvy Ray Smith. "Here we had this wealthy man, having plenty of money and getting us whatever we needed, we didn't have a budget, we had no goals, we just stretched the envelope. It was such an incredible opportunity, every day someone was creating something new. None of us slept, it was common to work 22 hour days. Everything you saw was something new. We blasted computer graphics into the world. It was like exploring a new continent."

However, the problem was that none of the people in the Computer Graphics Lab understood the scope of making a motion picture. "We were just a bunch of engineers in a little converted stable on Long Island, and we didn't know the first thing about making movies," said Beier (now technical director for Pacific Data Images). Gradually, people became discouraged and left for other places. Smith continues, "It just wasn't happening. We all thought we would take part in making a movie. But at the time it would have been impossible with the speed of the computers." Alex Schure made an animated movie called "Tubby the Tuba" using conventional animation techniques, and it turned out to be very disappointing. "We realized then that he really didn't have what it takes to make a movie," explains Smith. Catmull agrees, "It was awful, it was terrible, half the audience fell asleep at the screening. We walked out of the screening room thinking 'Thank God we didn't have anything to do with it, that computers were not used for anything in that movie!'" The time was ripe for George Lucas.

Lucas, with the success of Star Wars under his belt, was interested in using computer graphics on his next movie, "The Empire Strikes Back". So he contacted Triple I, who in turn produced a sequence that showed five X-Wing fighters flying in formation. However, disagreements over the financial aspects caused Lucas to drop it and go back to hand-made models. The experience nevertheless showed that photorealistic computer imagery was a possibility, so Lucas decided to assemble his own computer graphics department within his special effects company, Lucasfilm. Lucas sent out a person to find the brightest minds in the world of computer graphics, and he found NYIT. The individual initially went to Carnegie Mellon University and talked to a professor who referred him to one of his students, Ralph Guggenheim, who in turn referred him to Catmull at NYIT. After a few discussions, Catmull flew out to the west coast, met with Lucas and accepted his offer.

Initially only five from NYIT went with Catmull, including Alvy Ray Smith, David DiFrancesco, Tom Duff and Ralph Guggenheim. Later, however, others would take up the opportunity. Slowly the computer graphics lab started to fall apart and ceased to be the center of computer graphics research. The focus had shifted to Lucasfilm and a new graphics department at Cornell University. Over the next 15 years, Lucasfilm would be nominated for over 20 Academy Awards, winning 12 Oscars, five Technical Achievement Awards and two Emmys.

Looking back at NYIT, Catmull reflects "Alex Schure funded five years of great research work, and he deserves credit for that. We published a lot of papers, and were very open about our research, allowing people to come on tours and see our work. However now there are a lot of lawsuits going on, mainly because we didn't patent very much. People then subsequently acquired patents on that work and now we are called in frequently to show that we had done the work prior to other people."

Catmull continues, "We really had a major group of talented people in the lab, and the whole purpose was to do research and development for animation. We were actually quite stable for a long time, that first five years until I left. However, the primary issue was to make a feature film, and to do that you have to gather a lot of different kinds of skills: artistic, editorial, etc. Unfortunately, the managers of the school did not understand this. They appreciated the technical capabilities. So as a group we were well taken care of, but we all recognized that in order to produce a feature film we had to have another kind of person there, movie people, and basically those people weren't brought into the school. We were doing the R & D but we just could not achieve our goals there. So when Lucas came along, and proved that he did have those kinds of capabilities and said I want additional development in this area (of computer graphics), we jumped at it."

Thus in 1979 George Lucas formed the new computer graphics division of Lucasfilm to create computer imagery for motion pictures. Catmull became vice president and during the next six years, this new group would assemble one of the most talented teams of artists and programmers in the computer graphics industry. The advent of Lucasfilm's computer graphics department is viewed by many as another major milestone in the history of computer graphics. Here the researchers had access to funds, but at the same time they were working under a serious movie maker with real, definite goals.

In 1976 the ACM allowed exhibitors at the annual SIGGRAPH conference for the first time; ten companies exhibited their products. By 1993 this would grow to 275 companies and over 30,000 attendees.

Systems Simulation Ltd. (SSL) of London created an interesting computer graphics sequence for the movie "Alien" (released in 1979). The scene called for a computer-assisted landing sequence in which the terrain was viewed as a 3D wireframe. Initially a polystyrene landscape was going to be digitized to create the terrain. However, the terrain needed to be very rugged and complex, and digitizing it would have produced a huge database. Alan Sutcliffe of SSL decided instead to write a program to generate the mountains at random. The result was a very convincing mountain terrain displayed in wireframe with the hidden lines removed. This was typical of early efforts at using computer generated imagery (CGI) in motion pictures: using it to simulate advanced computers in sci-fi movies.

Meanwhile the Triple I team was busy in 1976 working on "Westworld's" sequel, "Futureworld." In this film, robot samurai warriors needed to materialize in a vacuum chamber. To accomplish this, Triple I digitized still photographs of the warriors and then used image processing techniques to manipulate the digitized images and make the warriors materialize over the background. Triple I developed custom film scanners and recorders for working on films at high resolutions, up to 2,500 lines. Also in that same year, at the Jet Propulsion Laboratory in Pasadena, California (before going to NYIT), James Blinn developed a new technique similar to texture mapping. Instead of simply mapping the colors from a 2D image onto a 3D object, the values are used to make the surface appear as if it had a dent or a bulge. A monochrome image is used, where white areas appear as bulges and black areas appear as dents; shades of gray are treated as smaller bumps or bulges depending on how dark or light they are. This form of mapping is called bump mapping.

Bump maps can add a new level of realism to 3D graphics by simulating a rough surface. When both a texture map and a bump map are applied at the same time, the result can be very convincing; without bump maps, a 3D object can look very flat and uninteresting.
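
A minimal sketch of how a bump map can perturb shading, under assumed conventions: the monochrome height image is never drawn directly; instead its local slope tilts the per-pixel surface normal, and the lighting then makes the flat surface read as dented or bulged. The finite-difference scheme, strength value and light direction are illustrative.

```python
# Bump mapping sketch: derive perturbed normals from a height image, then shade.
import numpy as np

def bump_normals(height_map, strength=1.0):
    """Turn a 2D height map (white = bulge, black = dent) into per-pixel normals."""
    dz_dx = np.gradient(height_map, axis=1) * strength   # slope left-to-right
    dz_dy = np.gradient(height_map, axis=0) * strength   # slope top-to-bottom
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(height_map)])
    return normals / np.linalg.norm(normals, axis=2, keepdims=True)

def lambert(normals, light_dir=(0.5, 0.5, 1.0)):
    """Simple diffuse lighting of the bumped surface."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    return np.clip(normals @ l, 0.0, 1.0)

bumps = np.zeros((64, 64))
bumps[24:40, 24:40] = 1.0                 # a square "bulge" in the middle
shaded = lambert(bump_normals(bumps, strength=4.0))   # brighter/darker along the edges
```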

Blinn also published a paper in that same year on creating surfaces that reflect their surroundings. This is accomplished by rendering six different views from the location of the object (top, bottom, front, back, left and right). Those views are then applied to the outside of the object in a way similar to standard texture mapping, with the result that the object appears to reflect its surroundings. This type of mapping is called environment mapping.
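
A minimal sketch of an environment-map lookup in the cube-map style described above: six images rendered from the object's position are stored by axis direction, and a reflected view direction picks which face and which texel to read. The face layout and the exact projection convention here are illustrative assumptions.

```python
# Environment (cube) mapping sketch: pick a face by dominant axis, then sample it.
import numpy as np

def cube_map_lookup(faces, direction):
    """faces: dict with '+x','-x','+y','-y','+z','-z' images; direction: 3-vector."""
    d = np.asarray(direction, dtype=float)
    axis = int(np.argmax(np.abs(d)))                  # dominant axis chooses the face
    sign = '+' if d[axis] >= 0 else '-'
    face = faces[sign + 'xyz'[axis]]
    # Project the remaining two components onto the face, mapping [-1, 1] -> [0, 1].
    u_axis, v_axis = [a for a in range(3) if a != axis]
    u = (d[u_axis] / abs(d[axis]) + 1) / 2
    v = (d[v_axis] / abs(d[axis]) + 1) / 2
    h, w = face.shape[:2]
    return face[min(int(v * (h - 1)), h - 1), min(int(u * (w - 1)), w - 1)]

# Six tiny single-color "renders" of the surroundings:
faces = {name: np.full((4, 4, 3), i * 40)
         for i, name in enumerate(['+x', '-x', '+y', '-y', '+z', '-z'])}
print(cube_map_lookup(faces, direction=[0.2, -0.9, 0.3]))   # samples the '-y' face
```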

In December of 1977, a new magazine called Computer Graphics World debuted. Back then the major stories involving computer graphics revolved around 2D drafting, remote sensing, IC design, military simulation, medical imaging and business graphics. Today, some 17 years later, CGW continues to be the primary medium for computer graphics related news and reviews. Computer graphics hardware was still prohibitively expensive at this time: the National Institutes of Health paid 65,000 dollars for their first frame buffer back in 1977. It had a resolution of 512x512 with 8 bits of color depth. Today a video adapter with the same capabilities can be purchased for under 100 dollars.

During the late 1970's Don Greenberg at Cornell University created a computer graphics lab that produced new methods of simulating realistic surfaces. Rob Cook at Cornell realized that the lighting model everyone had been using best approximated plastic. Cook wanted to create a new lighting model that allowed computers to simulate objects like polished metal. This new model takes into account the energy of the light source rather than the light's intensity or brightness.

As the second decade of computer graphics drew to a close, the industry was showing tremendous growth. In 1979, IBM released its 3279 color terminal, and within 9 months over 10,000 orders had been placed for it. By 1980, the combined value of computer graphics systems, hardware and services would reach a billion dollars.

source : http://hem.passagen.se

HISTORY OF COMPUTER GRAPHICS 1960-69

A major advance in computer graphics was to come from one MIT student, Ivan Sutherland. In 1961 Sutherland created a computer drawing program called Sketchpad. Using a light pen, Sketchpad allowed one to draw simple shapes on the computer screen, save them and even recall them later. The light pen itself had a small photoelectric cell in its tip. This cell emitted an electronic pulse whenever it was placed in front of a computer screen and the screen's electron gun fired directly at it. By simply timing the electronic pulse against the current location of the electron gun, it was easy to pinpoint exactly where the pen was on the screen at any given moment. Once that was determined, the computer could draw a cursor at that location.

Sutherland seemed to find the perfect solution for many of the graphics problems he faced. Even today, many standards of computer graphics interfaces got their start with this early Sketchpad program. One example of this is in drawing constraints. If one wants to draw a square for example, s/he doesn't have to worry about drawing four lines perfectly to form the edges of the box. One can simply specify that s/he wants to draw a box, and then specify the location and size of the box. The software will then construct a perfect box, with the right dimensions and at the right location. Another example is that Sutherland's software modeled objects - not just a picture of objects. In other words, with a model of a car, one could change the size of the tires without affecting the rest of the car. It could stretch the body of the car without deforming the tires.

These early computer graphics were vector graphics, composed of thin lines, whereas modern-day graphics are raster based, using pixels. The difference between vector graphics and raster graphics can be illustrated with a shipwrecked sailor. He creates an SOS sign in the sand by arranging rocks in the shape of the letters "SOS." He also has some brightly colored rope, with which he makes a second "SOS" sign by arranging the rope in the shapes of the letters. The rock SOS sign is similar to raster graphics: every pixel has to be individually accounted for. The rope SOS sign is equivalent to vector graphics: the computer simply sets the starting point and ending point for the line and perhaps bends it a little between the two end points. The disadvantage of vector files is that they cannot represent continuous tone images and they are limited in the number of colors available. Raster formats, on the other hand, work well for continuous tone images and can reproduce as many colors as needed.
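
A minimal sketch of the same contrast in code, with an assumed resolution: the "rope" is a vector description (just two endpoints), and the tiny rasterizer turns it into the "rocks" (a grid of individually set pixels).

```python
# Vector vs raster sketch: a two-endpoint line description rasterized onto a pixel grid.
import numpy as np

vector_line = {"start": (1, 1), "end": (8, 5)}        # the "rope": only two endpoints

def rasterize(line, width=10, height=8, samples=100):
    """The 'rocks': sample along the line and mark every pixel it crosses."""
    grid = np.zeros((height, width), dtype=np.uint8)
    (x0, y0), (x1, y1) = line["start"], line["end"]
    for t in np.linspace(0.0, 1.0, samples):
        x = int(round(x0 + t * (x1 - x0)))
        y = int(round(y0 + t * (y1 - y0)))
        grid[y, x] = 1
    return grid

for row in rasterize(vector_line):
    print("".join("#" if cell else "." for cell in row))
```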

Also in 1961 another student at MIT, Steve Russell, created the first video game, Spacewar. Written for the DEC PDP-1, Spacewar was an instant success and copies started flowing to other PDP-1 owners and eventually even DEC got a copy. The engineers at DEC used it as a diagnostic program on every new PDP-1 before shipping it. The sales force picked up on this quickly enough and when installing new units, would run the world's first video game for their new customers.

E. E. Zajac, a scientist at Bell Telephone Laboratories (BTL), created a film called "Simulation of a Two-Gyro Gravity Attitude Control System" in 1963. In this computer-generated film, Zajac showed how the attitude of a satellite could be altered as it orbits the Earth. He created the animation on an IBM 7090 mainframe computer. Also at BTL, Ken Knowlton, Frank Sinden and Michael Noll started working in the computer graphics field. Sinden created a film called "Force, Mass and Motion" illustrating Newton's laws of motion in operation. Around the same time, other scientists were creating computer graphics to illustrate their research. At Lawrence Radiation Laboratory, Nelson Max created the films "Flow of a Viscous Fluid" and "Propagation of Shock Waves in a Solid Form." Boeing Aircraft created a film called "Vibration of an Aircraft."

It wasn't long before major corporations started taking an interest in computer graphics. TRW, Lockheed-Georgia, General Electric and Sperry Rand are among the many companies that were getting started in computer graphics by the mid 1960's. IBM was quick to respond to this interest by releasing the IBM 2250 graphics terminal, the first commercially available graphics computer.

Ralph Baer, a supervising engineer at Sanders Associates, came up with a home video game in 1966 that was later licensed to Magnavox and called the Odyssey. While very simplistic, and requiring fairly inexpensive electronic parts, it allowed the player to move points of light around on a screen. It was the first consumer computer graphics product.

Also in 1966, Sutherland created the first computer-controlled head-mounted display (HMD). Called the Sword of Damocles because of the hardware required to support it, it displayed two separate wireframe images, one for each eye, allowing the viewer to see the computer scene in stereoscopic 3D. After receiving his Ph.D. from MIT, Sutherland became Director of Information Processing at ARPA (Advanced Research Projects Agency), and later became a professor at Harvard.

Dave Evans was director of engineering at Bendix Corporation's computer division from 1953 to 1962, after which he worked for the next five years as a visiting professor at Berkeley. There he continued his interest in computers and how they interfaced with people. In 1968 the University of Utah recruited Evans to form a computer science program, and computer graphics quickly became his primary interest. This new department would become the world's primary research center for computer graphics.

In 1967 Sutherland was recruited by Evans to join the computer science program at the University of Utah. There he perfected his HMD. Twenty years later, NASA would re-discover his techniques in their virtual reality research. At Utah, Sutherland and Evans were highly sought after as consultants by large companies, but they were frustrated by the lack of graphics hardware available at the time, so they started formulating a plan to start their own company.

A student by the name of Ed Catmull got started at the University of Utah in 1970 and signed up for Sutherland's computer graphics class. Catmull had just come from The Boeing Company and had been working on his degree in physics. Having grown up on Disney films, Catmull loved animation, yet he quickly discovered that he didn't have the talent for drawing. Now Catmull (along with many others) saw computers as the natural progression of animation, and he wanted to be part of the revolution. The first animation Catmull saw was his own: an animation of his hand opening and closing. It became one of his goals to produce a feature-length motion picture using computer graphics. In the same class, Fred Parke created an animation of his wife's face. Because of Evans' and Sutherland's presence, UU was gaining quite a reputation as the place to be for computer graphics research, which is what had drawn Catmull there to learn 3D animation.

The UU computer graphics laboratory was attracting people from all over, and John Warnock was one of those early pioneers; he would later found Adobe Systems and create a revolution in the publishing world with his PostScript page description language. Tom Stockham led the image processing group at UU, which worked closely with the computer graphics lab. Jim Clark was also there; he would later found Silicon Graphics, Inc.

The first major advance in 3D computer graphics was created at UU by these early pioneers: the hidden-surface algorithm. In order to draw a representation of a 3D object on the screen, the computer must determine which surfaces lie behind other surfaces from the viewer's perspective, and thus should be hidden when the computer creates (or renders) the image.
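
One simple piece of that hidden-surface problem is back-face culling: any polygon whose normal points away from the viewer cannot be seen and can be discarded before drawing. The sketch below is illustrative; complete hidden-surface removal also needs a depth test such as the z-buffer described earlier in this post.

```python
# Back-face culling sketch: discard polygons whose normals point away from the viewer.
import numpy as np

def is_front_facing(triangle, view_dir=(0, 0, -1)):
    """triangle: three 3D vertices, counter-clockwise when seen from the front."""
    a, b, c = (np.asarray(p, dtype=float) for p in triangle)
    normal = np.cross(b - a, c - a)
    return np.dot(normal, np.asarray(view_dir, dtype=float)) < 0   # faces the viewer

front = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]        # normal points toward +z
print(is_front_facing(front))                    # True: keep and draw it
print(is_front_facing(front[::-1]))              # False: back face, cull it
```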

source : http://hem.passagen.se

Tuesday, April 28, 2009

History of Computers

The Idea that Changed the World

The creation of the modern computer has changed the face of the planet. Today, there are more devices fitted with a microchip than there are human beings. The idea of a “computer” cannot be attributed to a single person. Rather, it has been the combined contribution of many innovative and forward-thinking scientists, mathematicians, philosophers and engineers that has brought us to what we now refer to as “the computer age”. This is their story…

It is no coincidence that the decimal number system rolls over after a count of 10. During a time when numbers did not yet exist, fingers, along with twigs and pebbles, were the most convenient way of tracking quantities. Around 500 B.C., the Babylonians made advances in accounting and devised the abacus — a small contraption with a few rods and free-moving beads. This device operated by accepting data (quantities), a set of instructions (add or subtract) and in return provided an answer — it captured the essence of a computer. The abacus is an example of the earliest known computing device: a primitive calculator that possesses some sort of innate intelligence allowing it to translate instructions into meaningful answers.

Centuries later, the Hindu-Arabic decimal numbering system emerged — the basic language of mathematics. Armed with this language, it became possible to learn the answers to more complex problems such as 100+100 without having to visualise 200 pebbles. It also gave rise to pre-computed answers in the form of lookup tables — the most trivial form of computational engineering. Lookup tables such as the ones used for multiplication can instantaneously provide answers at a glance without any mental effort.

The model of the abacus integrated the knowledge of the decimal number system and evolved into the mechanical calculator. During the sixteenth century, Leonardo Da Vinci, master painter and inventor, designed an adding machine that was a complex clockwork of mechanical cogs and rods with a few dozen buttons for adding and subtracting numbers. Though clunky, the machine was a testament to engineering design. The design went undiscovered until 1967, when it was found in Da Vinci's cryptic notebooks, transcribed in his usual mirror script.

Unfortunately, the adding machine performed only one mathematical function at a time. Charles Babbage devoted his life’s work to remedy that. Starting in 1837, Babbage worked on the design of a general purpose computer called the Analytical Engine. His efforts eventually earned him the title “father of computing”.

A working model of the Analytical Engine never materialised due to financial constraints, but the design appeared to be sound. Had it ever been constructed, the Analytical Engine would have been some 30 metres long and 10 metres wide, powered by a steam engine, and would have accepted not only data (for example, the dimensions of a triangle) but also programs or functions (for computing the area of that triangle). The input would be fed in through punched cards — an idea borrowed from the textile industry, which used such cards for automatically guiding patterns in weaving machines. The results were designed to be output through a curve plotter or a bell. The Analytical Engine was the first design for a programmable computer, and as such it laid out the fundamental principles found in any modern computer: input, a program, a data store for holding intermediary answers, output and an arithmetic unit that performed the basic functions underlying all computation (add, subtract, divide and multiply).

Babbage’s principles found a purpose in the 1890s. The US Census Board realised that manually counting the results of the current census would take more than 10 years, by which time another census would be due. As part of a competition to come up with a solution, Herman Hollerith, an employee of the census department, devised a machine that took in punched cards similar to those of Babbage’s Analytical Engine and tabulated the results for a population of over 62 million people in only six weeks. The idea for the system came to Hollerith from railroad operators, who punched tickets in a certain manner to indicate whether the ticket holder was tall, dark, male, et cetera.

Mechanical computational engines continued to evolve, and devices such as chronometers and watches became marvels of mechanical orchestration and miniaturisation. Containing anywhere from a few dozen to a few hundred moving parts comprising clutches, cogs, gears, springs, coils and so on, these contraptions could keep a heartbeat for years or even decades with millisecond precision. The complex orchestration of these parts also explains the higher price tags on some of the more sophisticated of these watches, compared to the much cheaper digital counterparts of the modern era. Mechanical contraptions, however, have a physical limit to how small they can get and are constrained by the number and size of their parts, friction, weight, portability, power requirements and precision.

Fortunately, science in typical fashion made a leap during the mid-1800s. Thomas Alva Edison’s pioneering work in the field of electricity allowed it to be harnessed for practical use for the first time. With the control of electricity came the radio, the light bulb, wires and other invaluable electrical inventions.

As physics paved the way for electrical innovation, scientists discovered in electrical charge a way to represent data. The beads of the abacus were replaced by bits in the modern computer — essentially a bit or ‘binary digit’ is a small electrical charge that represents a 1 or 0. The creation of the bit marked a transition from the decimal system for humans (10 primary numbers from zero to nine) to a binary system for computers (only two numbers, zero and one).

Binary arithmetic provided the foundation for operating with bits. It was the contribution of Gottfried Leibniz, a prodigy in mathematics, philosophy, linguistics, law, politics and logic. In fact, he posited that every argument could be reduced to numbers using binary logic. Hence, the 1s and 0s of binary arithmetic are also referred to as “true” and “false” (or “on” and “off” due to their application in electronic switches).

George Boole, a mathematician and philosopher, relied on binary arithmetic to advance his theories of logic, at the time still a branch of philosophy. The field would later evolve into Boolean algebra, used for managing and calculating the outcome of a number of bits interacting with each other. The simplest Boolean logic might take the following form: a light switch toggles a light bulb — flipping the switch turns the light on if, and only if, it’s off, and vice versa. In the modern computer, however, a few million such switches are wired into a single circuit, and flipping a combination of these switches can achieve results that can only be managed using the techniques of Boolean algebra.
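
A minimal sketch of such rules in code, under illustrative assumptions: the light-switch example becomes an exclusive-or of two toggles, and a full adder shows how a handful of AND, OR and XOR operations combine bits into arithmetic.

```python
# Boolean logic sketch: switches as bits, combined by simple logical rules.

def toggle_light(switch_a: bool, switch_b: bool) -> bool:
    """Two wall switches controlling one bulb: flipping either one changes the light."""
    return switch_a != switch_b                      # exclusive OR

def full_adder(a: int, b: int, carry_in: int) -> tuple:
    """Add three bits using only AND, OR and XOR, returning (sum_bit, carry_out)."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

print(toggle_light(True, False))    # True: the light is on
print(full_adder(1, 1, 0))          # (0, 1): one plus one is binary 10
```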

At its core, a computer is doing just that, switching a galaxy of bits on or off. A ballet of bits is constantly playing out and each flip of the switch results in a domino-like chain reaction. Each ballet of the bits is used to represent an outcome and must be orchestrated with absolute accuracy.

During a 3D game for example, the tiniest movement of the mouse turns the ball, which turns a wheel that is being monitored by a chip whose purpose is translating this movement into an electronic signal. The chip changes a few thousand bits and causes a chain reaction down the wire of the mouse connected to the computer. The reaction eventually ends up in the computer’s main processor, which in turn tells the graphics processor that the mouse has moved one millimetre. The graphics card does a few thousand mathematical computations to calculate the shadow, lighting, shading and angle of light, and generates a new image corresponding to the movement of the mouse. While it does all this, the memory in the computer does the job of remembering the previous position based on which the next image is calculated. The graphics card renders the new image on the monitor by changing the state of a few billion bits on the screen and producing a massive collage of a few million pixels — all this within a fraction of a second.

The language of bits was not always the language of choice. The idea came from the early days of telephone companies when they used switches with “on” and “off” states to connect or disconnect a circuit. Human operators made the connections by manually operating the switches. For long distance calls, a local operator connected to a foreign telephone exchange, which in turn connected to its own local exchange and created a link between the calling parties. A computer uses the same principle of using switches to control bits and direct the flow of information.

In his 1937 master’s thesis at the Massachusetts Institute of Technology, Claude Shannon proved that the management of a large number of these switches could be simplified using Boolean algebra. Inversely, he also proved that switches could be used to solve Boolean algebraic problems. This meant that if a set of bits interacted in a particular way, they would magically result in the answer — in the same way that mixing red and green light yields yellow.

How this magical interaction happened was the pioneering work of Alan Turing, father of modern computing. In 1936, a year before Shannon’s thesis, Turing laid out the fundamental theoretical model for all modern computers by detailing the Turing Machine. Its basic idea is quite simple: in a perfectly well choreographed ballet, for example, a dancer does not need to keep track of the entire ballet. Instead, she may need to keep track of only a few simple cues: step forward if dancer to the left steps back; spin synchronously with the lead dancer; stop dancing when the dancer in front stops dancing. Each dancer in the ballet follows their own set of cues, which creates a chain reaction among other dancers. The ballet is initiated (or brought to a halt) by a single dancer responsible for starting the chain reaction.

Similarly, bits react to cues and influence each other. When the ballet of bits concludes, the new state of the bits (for example, 111, 001 or 010) represents a particular result. Turing’s contribution is remarkable due to the nature of the pioneering work and the thought experiments that led him to develop such a system.
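
A minimal sketch of a Turing machine in that spirit: a tape of symbols, a read/write head, and a table of cues of the form "in this state, reading this symbol: write, move, switch state." The example machine and its rule table are illustrative assumptions; it simply flips every bit of a binary string and then halts.

```python
# Turing machine sketch: a tape, a head, and a small table of state-transition cues.

def run_turing_machine(tape, rules, state="flip", blank=" "):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape).strip()

rules = {
    ("flip", "0"): ("1", "R", "flip"),   # read 0: write 1, move right, stay in 'flip'
    ("flip", "1"): ("0", "R", "flip"),   # read 1: write 0, move right, stay in 'flip'
    ("flip", " "): (" ", "R", "halt"),   # past the end of the input: stop
}

print(run_turing_machine("10110", rules))   # -> "01001"
```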

Turing’s work added to centuries of advances and breakthroughs in engineering, mathematics, physics and logic, and to an endless pursuit of the human spirit, all of which would manifest themselves in the form of a 30-tonne machine called the Electronic Numerical Integrator And Computer (ENIAC).

The ENIAC was the first fully programmable machine capable of solving almost any mathematical problem. Completed for the US Army in 1946, the ENIAC was capable of adding 5,000 numbers per second. It was powered by 18,000 vacuum tubes and 6,000 switches, contained around five million hand-soldered joints and took three years to build. This marvel of a machine, however, was built specifically to calculate in a matter of hours the trajectories of artillery shells fired at enemy targets, a task that would otherwise have taken days to compute.

The vacuum tubes used in the ENIAC were vaguely similar to light bulbs in both function and form, but with metal casings instead of glass. These vacuum tubes were used to represent data with electrical charge. They were problematic, however, and kept burning out. The heat and lights of the ENIAC also attracted a lot of moths, which in turn caused a lot of short circuiting. Computer problems henceforth came to be known as “bugs” and fixing them, “debugging”. Due to these problems the ENIAC could sometimes be down for half a day at a time and required a lot of hands to keep it up and running.

While the input data could be stored on the ENIAC, the program to operate on the input had to be wired through plug board wiring. Programming it was cumbersome and each program required unplugging and re-plugging hundreds of wires. This method of programming was almost as primitive as Babbage’s punch cards. And the limitation meant that computers, although programmable, were restricted by the complexity of the process.

It was the mathematician John von Neumann who, shortly after the ENIAC, introduced the concept of a stored-program computer. Storing the program in the computer memory meant that the system of semi-permanent plug board wiring on the ENIAC could be deprecated. Bits would represent not only data, but also the programs themselves which consumed the data — bits controlled by bits.

The stored-program design had profound implications. Prior to this breakthrough, computers accepted normal input and passed it on to programs which operated on it. However, if the program itself was an input, then operating on this program would require another master program. Turing’s Universal Machine described such a master program and von Neumann provided the implementation which has now become the model for nearly all computers.

Even with the programmable architecture well in place, it was doubtful if vacuum tubes would allow computers to scale. These deficient vacuum tubes set the backdrop for the most important invention of the digital age: the transistor, for which its three co-inventors William Shockley, John Bardeen and Walter Brattain went on to receive the Nobel Prize in 1956.

Transistors are microscopically small in contrast to the finger-sized vacuum tubes, require less power and are capable of switching states (1 to 0 and 0 to 1) much faster. Their beauty also lies in their composition: as solid-state semiconductors, they are built from material that can either conduct electrical charge, like a metal, or block it, like rubber.

To deliver on the promise of transistors, Shockley went on to head the Shockley Semiconductor Laboratory in Northern California. Many of his staff eventually left because of his paranoid and competitive nature (once, an employee cut her finger and Shockley suspected it was actually a plot targeted toward him; to find the culprit, he forced a lie detector test upon all his employees).

Eight of his scientists quit Shockley Semiconductor. This “traitorous eight” — as Shockley referred to them — went on to form Fairchild Semiconductor in the same region and adopted the more abundant silicon as the semiconducting material of choice. This marked the beginnings of Silicon Valley, which today is the epicenter of computers and high-tech businesses.

Transistors, which represent the bits in a computer, needed to be wired together to interact. Common configurations of wiring came together as integrated circuits, or microchips; one of the earliest practical microchips was co-invented by Robert Noyce, one of the “traitorous eight”. If transistors are the characters of an alphabet, microchips are the words formed from those characters, and computers are compositions of dozens of these microchips. All digital electronic devices are composed of microchips, with many of them sharing the same common subset of chips.

Robert Noyce, along with Gordon Moore, would go on to form Integrated Electronics, now better known as Intel. It was at Intel that he oversaw the work of Ted Hoff, who co-invented the greatest microchip of them all. The microprocessor, or Central Processing Unit (CPU), found in all personal computers (PCs), is a single, highly complex microchip that functions as the computer's brain.

Co-founder Gordon Moore, meanwhile, became famous for his observation that the number of transistors on a microprocessor would double every two years. Moore’s speculation became known as Moore’s Law and has held up since it was first posited in 1965. Current Intel Pentium 4 processors have the muscle of over 100 million transistors fitted inside a matchbox-size chip that is capable of adding over 5,000 million numbers per second. Contrast this with the 18,000 vacuum tubes of the 30-tonne ENIAC, which could add only 5,000 numbers per second, and the significance of transistor technology becomes clear. If the Greeks had had an Intel Pentium 4, they could have saved themselves centuries of mathematical labouring.

Intel processors started their legacy in 1975 by powering the first commercial personal computer, the MITS Altair 8800, with an Intel 8080 processor. Microsoft founders Bill Gates and Paul Allen would go on to develop Altair BASIC, its first programming language. Interestingly enough, in the same year, Advanced Micro Devices (AMD) — also formed by a group of Fairchild defectors — reverse engineered the Intel 8080 processor and started the long running Intel-AMD rivalry.

While the Altair was being sold as a hobbyist kit, the Apple I was the first fully assembled computer developed around the same time by hobbyist Steve Wozniak and sold with the help of close friend, Steve Jobs. The two subsequently founded Apple Computers in Jobs’ family garage. Today, 30 years later, Jobs serves as the visionary and CEO of Apple Computers Incorporated.

Developed in 1973, it was the non-commercial Xerox Alto, however, that took the title for first personal computer. The Alto, developed at Xerox PARC (Palo Alto Research Center) in Palo Alto, California, was one of a dozen inventions to come out of the research centre including colour graphics, object oriented programming and wide application of the mouse. After seeing a demo of the Alto, Apple engineers purportedly adopted the concept for their own commercial computer Lisa, which eventually proved to be too expensive and ahead of its time. The lack of commercial demand meant that over 2,000 Lisas would need to be buried in a landfill.

Contrary to IBM chief Thomas Watson’s speculation in 1943 that “there is a world market for maybe five computers,” personal computers found widespread demand in a growing market that has today reached nearly two billion units. This figure primarily represents PCs, but their siblings and cousins (cellphones, PDAs, laptops) far exceed even the human population of the planet. Whether in the form of GPS tracking devices, rain-sensing windshield wipers or electronic hearing implants, microchips continue to shrink and integrate into our lives.

While the hunger for more powerful and smaller chips is insatiable, Moore’s Law seems to be giving way as the current generation of microprocessors shows signs of plateauing. Even though the natural laws of physics dictate that bits can be as small as the atoms in which they are stored, we are far from reaching this atomic threshold. The problem lies in the economics of miniaturisation, as increasingly expensive fabrication plants for producing smaller chips yield disproportionately diminishing returns. Nonetheless, all hope is not yet lost, as scientists are already exploring the frontiers of sub-atomic particles.

An atom is composed of a nucleus of protons and neutrons, orbited by electrons. Removing or adding electrons leaves the atom positively or negatively charged, and those two states can stand in for bits (1 and 0). Protons and neutrons are in turn made up of three quarks each. Understanding the nature of these quarks and their influence on neutrons and protons may unlock the power to make today’s most powerful supercomputers pale in comparison. If such quantum computers ever materialise, they will in theory be able to compute in a matter of days what would take a few million years with today’s computing ability.

While the shape, form and power of computing devices continues to evolve, a parallel evolution has been taking place in the related field of communication technology.

The first electric telegraphs were already communicating in 1832, a century prior to the ENIAC. George Stibitz, a researcher at Bell Labs during the 1930s and 1940s, used a teletypewriter (essentially a typewriter hooked up to a telephone line) to communicate with a calculator on the other end and receive results for remote computation. This was the first time a computer had ever been operated remotely over a phone line.

The US Department of Defense’s Advanced Research Projects Agency (ARPA, later DARPA) noted that computers still lacked a way to talk to one another and initiated efforts to fill the void. Around 1962, a series of memos about a “Galactic Network” laid the conceptual foundations of the internet. Shortly thereafter, Vinton Cerf received a “request for proposal” from the agency to design a packet switched network. Cerf’s research efforts contributed heavily to the design of the first network of computers and earned him the title “father of the internet”.

The resilience of the internet derives from the packet switched network Cerf detailed. In such a model, all information is divided into tiny packets. Each packet is transmitted separately and embarks on its own journey to find its destination on the internet. Its only strategy for getting there is to ask intermediate routers (which direct traffic on the internet) for directions to the next router that might lead the way, and so on until the last router points it to its final destination. Anyone who has ever gotten lost and asked for directions can probably relate to a packet. A dozen things can and do go wrong for these packets: they get lost in transit, are captured by a hacker, or arrive at their destination out of order with respect to other packets.
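
A minimal sketch of that idea, with an assumed packet size: a message is split into small numbered packets, the network may deliver them jumbled, and the receiver uses the sequence numbers to put them back in order. Real TCP/IP, described in the next paragraphs, also handles loss, retransmission and routing.

```python
# Packet switching sketch: split a message into numbered packets, reassemble in order.
import random

def to_packets(message: str, size: int = 8):
    """Split a message into (sequence_number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Sort packets by sequence number and join the payloads back together."""
    return "".join(payload for _, payload in sorted(packets))

packets = to_packets("Packets can arrive in any order and still make sense.")
random.shuffle(packets)                      # the network delivers them jumbled
print(reassemble(packets))                   # the original message comes back intact
```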

The research and prototyping for refining the packet switched network began at the University of California at Los Angeles (UCLA) where Cerf was doing graduate work. By 1969 the Advanced Research Projects Agency Network (ARPANET) would take shape as UCLA, University of California at Santa Barbara, Stanford Research Institute and University of Utah came together to form a network.

Along with the contributions of dozens of other individuals, Cerf would go on to develop the Transmission Control Protocol (TCP) at his new home at Stanford, where he had taken up an assistant professorship in computer science and electrical engineering. After four iterations, the TCP suite was finalised in 1978, following an exciting demonstration in July 1977 in which a packet was sent on a 94,000 mile round trip over the ARPANET without losing a single bit. As a result of its resilient design and infrastructure, TCP/IP (Transmission Control Protocol/Internet Protocol) became the standard for transferring data across networks. Relying on TCP/IP, the ARPANET grew into the internet and has since continued to scale unchecked to become what it is today.

The internet was primarily used for transferring and sharing data. It handled documents but webpages as such did not exist until Tim Berners-Lee, an independent contractor at CERN, became frustrated with the lack of ability to easily share and update information between researchers. He transformed the internet landscape by introducing the concept of hyperlinks — the links on webpages that allow them to point to each other with the click of a mouse. These hyperlinks created a ‘global web’ of linked pages commonly referred to as the World Wide Web (WWW).

As far as communication networks go, the internet overshadows the telephony network, integrates the television, radio and newspapers and challenges even our physical social realm. Its humble beginnings ultimately brought the communication revolution to all its glory not only for humans but also for devices.

Through microchips, electronic devices became aware of their own function. A chip acting as the brain inside a cellphone encodes every bit of relevant information about the host. Relying on exacting communication protocols, these devices suddenly become aware of the existence of other devices made up of similar microchips and can speak to them in a similar language.

This new species of electronic beings are continually evolving and trying to overcome their cultural differences so that an alarm clock can talk to the coffee maker in the morning or a health monitor can check our vital statistics during recovery. These modern-day slaves encapsulate tiny worker atoms which manage for us what our preoccupied minds rather not. The quality of life during our brief welcome on this planet has been elevated because of them and in return for doing everything they are told, they ask for nothing. If we are God’s creatures, then computers are ours: a manifestation of the human spirit and potential.

Article source : http://aleembawany.com

History of Computers

The Idea that Changed the World

The creation of the modern computer has changed the face of the planet. Today, there are more devices fitted with a microchip than there are human beings. The idea of a “computer” cannot be attributed to a single person. Rather, it has been the combined contribution of many innovative and forward-thinking scientists, mathematicians, philosophers and engineers that has brought us to what we now refer to as “the computer age”. This is their story…

It is no coincidence that the decimal number system rolls over after a count of ten. During a time when numbers did not yet exist, fingers, along with twigs and pebbles, were the most convenient way of tracking quantities. Around 500 B.C., the Babylonians made advances in accounting and devised the abacus — a small contraption with a few rods and free-moving beads. The device operated by accepting data (quantities) and a set of instructions (add or subtract), and in return provided an answer — it captured the essence of a computer. The abacus is the earliest known computing device: a primitive calculator whose simple mechanics translate instructions into meaningful answers.

Centuries later, the Hindu-Arabic decimal numbering system — the basic language of mathematics — came into use. Armed with this language, it became possible to work out the answers to more complex problems such as 100+100 without having to visualise 200 pebbles. It also paved the way for pre-computed answers in the form of lookup tables — the most trivial form of computational engineering. Lookup tables such as the ones used for multiplication can provide answers at a glance without any mental effort.

The model of the abacus integrated the knowledge of the decimal number system and evolved into the mechanical calculator. Around the turn of the sixteenth century, Leonardo da Vinci, master painter and inventor, designed an adding machine that was a complex clockwork of mechanical cogs and rods with a few dozen buttons for adding and subtracting numbers. Though clunky, the machine was a testament to engineered design. The design went undiscovered until 1967, when it was found in da Vinci’s cryptic notebooks, written in his usual mirror script.

Unfortunately, such adding machines performed only one mathematical function at a time. Charles Babbage devoted his life’s work to remedying that. Starting in 1837, Babbage worked on the design of a general purpose computer called the Analytical Engine. His efforts eventually earned him the title “father of computing”.

A working model of the Analytical Engine never materialised due to financial constraints, but the design appeared to be sound. Were it ever to have been constructed, the Analytical Engine would have been some 30 metres long and 10 metres wide, powered by a steam engine, and would have accepted not only data (for example, the dimensions of a triangle) but also programs, or functions (for computing the area of that triangle). The input would be fed through hole-punched cards — an idea borrowed from the textile industry, which used such cards to automatically guide patterns in weaving machines. The results were designed to be output through a curve plotter or a bell. The Analytical Engine was the first design for a programmable computer and, as such, it laid out the fundamental principles found in any modern computer: input, a program, a data store for intermediary answers, output and an arithmetic unit performing the basic functions underlying all computation (add, subtract, divide and multiply).
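
To make those principles concrete, here is a minimal sketch in modern Python of a toy machine with the same five parts: input, a program, a store, an arithmetic unit and output. The instruction format and the names are illustrative assumptions, not Babbage’s notation.

```python
# Illustrative sketch (not Babbage's notation): a toy machine with the five
# parts named above -- input, a program, a store, an arithmetic unit and output.

def run(program, input_values):
    store = dict(input_values)      # the store: input data and intermediary answers

    # the arithmetic unit: the four basic functions underlying all computation
    arithmetic = {
        "add": lambda a, b: a + b,
        "sub": lambda a, b: a - b,
        "mul": lambda a, b: a * b,
        "div": lambda a, b: a / b,
    }

    output = []
    for op, a, b, dest in program:              # the program: a list of instructions
        store[dest] = arithmetic[op](store[a], store[b])
        output.append((dest, store[dest]))      # output: results as they are produced
    return output

# Example: area of a triangle = base * height / 2
program = [("mul", "base", "height", "tmp"),
           ("div", "tmp", "two", "area")]
print(run(program, {"base": 3.0, "height": 4.0, "two": 2.0}))
# -> [('tmp', 12.0), ('area', 6.0)]
```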

Babbage’s principles found a purpose in the 1890s. The US Census Board realised that manually counting the results of the current census would take more than 10 years, by which time another census would be due. As part of a competition to come up with a solution, Herman Hollerith, an employee of the census department, devised a machine that took in punch cards similar to those of Babbage’s Analytical Engine and tallied the results for over 62 million people in only six weeks. The idea for the system came to Hollerith from railroad operators who punched tickets in a certain manner to indicate whether the ticket holder was tall, dark, male, et cetera.

Mechanical computational engines continued to evolve, and devices such as chronometers and watches became marvels of mechanical orchestration and miniaturisation. Containing anywhere from a few dozen to a few hundred moving parts comprising clutches, cogs, gears, springs, coils and so on, these contraptions could keep a heartbeat for years or even decades with remarkable precision. The complex orchestration of these parts also explains the higher price tags on some of the more sophisticated of these watches compared to their much cheaper digital counterparts of the modern era. Mechanical contraptions, however, have a physical limit to how small they can get, constrained by the number and size of their parts, friction, weight, portability, power requirements and precision.

Fortunately, science in typical fashion made a leap during the second half of the 1800s. Pioneering work by Thomas Alva Edison and his contemporaries allowed electricity to be harnessed for practical use for the first time. With the control of electricity came the light bulb, the radio, electrical wiring and other invaluable inventions.

As physics paved the way for electrical innovation, scientists discovered in electrical charge a way to represent data. The beads of the abacus were replaced by bits in the modern computer — essentially, a bit or ‘binary digit’ is a small electrical charge that represents a 1 or a 0. The creation of the bit marked a transition from the decimal system of humans (ten digits, from zero to nine) to the binary system of computers (only two digits, zero and one).
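
A quick illustration of the same quantity written for humans and for machines (a sketch in Python; the number chosen is arbitrary):

```python
# The same quantity written for humans (decimal) and for machines (binary).
n = 200
print(bin(n))               # '0b11001000' -- eight bits, each either a 1 or a 0
print(int("11001000", 2))   # 200 -- back from bits to decimal
```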

Binary arithmetic provided the foundation for operating with bits. It was the contribution of Gottfried Leibniz, a prodigy in mathematics, philosophy, linguistics, law, politics and logic. In fact, he posited that every argument could be reduced to numbers using binary logic. Hence, the 1s and 0s of binary arithmetic are also referred to as “true” and “false” (or “on” and “off”, owing to their application in electronic switches).

George Boole, a mathematician and philosopher, relied on binary arithmetic to advance his theories of logic, at the time still a branch of philosophy. The field would later evolve into Boolean algebra, used for managing and calculating the outcome of a number of bits interacting with one another. The simplest Boolean logic might take the following form: a light switch toggles a light bulb — flipping the switch turns the light on if, and only if, it is off, and vice versa. In a modern computer, however, many millions of such switches are wired into a single circuit, and flipping a combination of them achieves results that can only be managed using the techniques of Boolean algebra.
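
That light-switch toggle can be written out in a few lines of Python — an illustrative sketch, not anything resembling real circuitry:

```python
# The light-switch toggle described above: each flip inverts the bulb's state.
light_on = False
for flip in range(3):
    light_on = not light_on   # on if, and only if, it was off, and vice versa
    print(light_on)           # True, False, True
```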

At its core, a computer is doing just that, switching a galaxy of bits on or off. A ballet of bits is constantly playing out and each flip of the switch results in a domino-like chain reaction. Each ballet of the bits is used to represent an outcome and must be orchestrated with absolute accuracy.

During a 3D game for example, the tiniest movement of the mouse turns the ball, which turns a wheel that is being monitored by a chip whose purpose is translating this movement into an electronic signal. The chip changes a few thousand bits and causes a chain reaction down the wire of the mouse connected to the computer. The reaction eventually ends up in the computer’s main processor, which in turn tells the graphics processor that the mouse has moved one millimetre. The graphics card does a few thousand mathematical computations to calculate the shadow, lighting, shading and angle of light, and generates a new image corresponding to the movement of the mouse. While it does all this, the memory in the computer does the job of remembering the previous position based on which the next image is calculated. The graphics card renders the new image on the monitor by changing the state of a few billion bits on the screen and producing a massive collage of a few million pixels — all this within a fraction of a second.

The language of bits was not always the language of choice. The idea came from the early days of telephone companies when they used switches with “on” and “off” states to connect or disconnect a circuit. Human operators made the connections by manually operating the switches. For long distance calls, a local operator connected to a foreign telephone exchange, which in turn connected to its own local exchange and created a link between the calling parties. A computer uses the same principle of using switches to control bits and direct the flow of information.

In his 1937 master’s thesis at the Massachusetts Institute of Technology, Claude Shannon proved that the management of a large number of these switches could be simplified using Boolean algebra. Inversely, he also proved that switches could be used to solve Boolean algebraic problems. This meant that if a set of bits interacted in a particular way, they would magically result in the answer — in the same way that mixing red and green light yields yellow.
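
Shannon’s insight can be sketched in code: two switches in series conduct only when both are closed (an AND), while two in parallel conduct when either is closed (an OR), so a network of switches evaluates a Boolean expression. The functions and the sample network below are illustrative assumptions, not Shannon’s own notation.

```python
# Switches in series conduct only if both are closed (AND); switches in
# parallel conduct if either is closed (OR). A network of switches therefore
# evaluates a Boolean expression.
def series(a, b):
    return a and b

def parallel(a, b):
    return a or b

# A small network: (s1 AND s2) OR s3 -- its output for every switch setting
for s1 in (False, True):
    for s2 in (False, True):
        for s3 in (False, True):
            print(s1, s2, s3, "->", parallel(series(s1, s2), s3))
```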

How this magical interaction happened was the pioneering work of Alan Turing, father of modern computing. In 1936, a year before Shannon’s thesis, Turing laid out the fundamental theoretical model for all modern computers by detailing the Turing Machine. Its basic idea is quite simple: in a perfectly well choreographed ballet, for example, a dancer does not need to keep track of the entire ballet. Instead, she may need to keep track of only a few simple cues: step forward if dancer to the left steps back; spin synchronously with the lead dancer; stop dancing when the dancer in front stops dancing. Each dancer in the ballet follows their own set of cues, which creates a chain reaction among other dancers. The ballet is initiated (or brought to a halt) by a single dancer responsible for starting the chain reaction.

Similarly, bits react to cues and influence each other. When the ballet of bits concludes, each new state of the bits (for example, 111, 001 or 010) represents a different result. Turing’s contribution is remarkable both for its pioneering nature and for the thought experiments that led him to the model.
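
For the curious, a Turing machine is small enough to sketch in a few lines of Python. The rule table below, which simply flips every bit on the tape and halts at the first blank, is an invented example for illustration, not one of Turing’s own machines.

```python
# A minimal Turing machine: a tape, a head, and a table of cues of the form
# "in this state, reading this symbol: write, move, switch state". The rules
# below are an invented example that flips every bit and halts at the blank.
def turing_machine(tape, rules, state="start"):
    tape, head = list(tape), 0
    while state != "halt":
        if head == len(tape):
            tape.append("_")                  # extend the tape with a blank
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),       # blank cell: stop
}
print(turing_machine("1011", rules))          # -> '0100_'
```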

Turing’s work added to centuries of advances and breakthroughs in engineering, mathematics, physics and logic, an endless pursuit of the human spirit that would manifest itself in the form of a 30-tonne machine called the Electronic Numerical Integrator And Computer (ENIAC).

The ENIAC was the first fully programmable machine capable of solving almost any mathematical problem. Completed for the US Army in 1946, the ENIAC was capable of adding 5,000 numbers per second. It was powered by 18,000 vacuum tubes, 6,000 switches and around five million hand-soldered joints, and took three years to build. In practice, this marvel of a machine was primarily used to calculate, in a matter of hours, the trajectories of artillery shells aimed at enemy targets. This was a task that would otherwise have taken days to compute.

The vacuum tubes used in the ENIAC were vaguely similar to light bulbs in both form and function. They served to represent data using electrical charge but were problematic and kept burning out. The heat and glow of such machines also attracted moths, which could short-circuit components; one famously found trapped in a relay of a contemporary machine helped popularise the terms “bugs” for computer problems and “debugging” for fixing them. Because of these failures, the ENIAC could sometimes be down for half a day at a time and required a lot of hands to keep it up and running.

While the input data could be stored on the ENIAC, the program to operate on the input had to be wired through plug board wiring. Programming it was cumbersome and each program required unplugging and re-plugging hundreds of wires. This method of programming was almost as primitive as Babbage’s punch cards. And the limitation meant that computers, although programmable, were restricted by the complexity of the process.

It was the mathematician John von Neumann who, shortly after the ENIAC, introduced the concept of a stored-program computer. Storing the program in the computer’s memory meant that the system of semi-permanent plug board wiring on the ENIAC could be done away with. Bits would represent not only data, but also the programs themselves which consumed the data — bits controlled by bits.

The stored-program design had profound implications. Prior to this breakthrough, computers accepted normal input and passed it on to programs which operated on it. However, if the program itself was an input, then operating on this program would require another master program. Turing’s Universal Machine described such a master program and von Neumann provided the implementation which has now become the model for nearly all computers.
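
The stored-program idea can be sketched as follows: the program sits in the same memory as the data it manipulates, and a master loop (the interpreter) reads it like any other data. The instruction names here are illustrative assumptions.

```python
# Stored-program sketch: the program lives in the same memory as the data it
# consumes, and a master loop (the interpreter) reads it like any other data.
memory = {
    "x": 2, "y": 3, "z": 0,                     # data...
    "program": [("mul", "x", "y", "z"),         # ...and the program itself
                ("print", "z")],
}

for instruction in memory["program"]:
    if instruction[0] == "mul":
        _, a, b, dest = instruction
        memory[dest] = memory[a] * memory[b]
    elif instruction[0] == "print":
        print(memory[instruction[1]])           # -> 6
```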

Even with the programmable architecture well in place, it was doubtful whether vacuum tubes would allow computers to scale. These deficient vacuum tubes set the backdrop for the most important invention of the digital age: the transistor, whose three co-inventors, William Shockley, John Bardeen and Walter Brattain, went on to receive the Nobel Prize in Physics in 1956.

Transistors are microscopically small in contrast to the finger-sized vacuum tubes, require less power and are capable of switching states (1 to 0 and 0 to 1) much faster. Their beauty also lies in their composition: as solid-state semiconductors, they are built from material that can either conduct electrical charge, like a metal, or block it, like rubber, depending on how it is driven.

To deliver on the promise of transistors, Shockley went on to head the Shockley Semiconductor Laboratory in Northern California, where he recruited a team of brilliant young scientists and engineers. Many of them eventually left because of his paranoid and competitive nature (once, when an employee cut her finger, Shockley suspected it was actually a plot targeted toward him and, to find the culprit, forced a lie detector test upon all his employees).

Eight of those scientists quit Shockley Semiconductor together. The “traitorous eight” — as Shockley referred to them — went on to form Fairchild Semiconductor in the same region and adopted the more abundant silicon as the semiconducting material of choice. This marked the beginnings of Silicon Valley, which today is the epicentre of computers and high-tech business.

Transistors, which represent the bits in a computer, needed to be wired together in order to interact. Common configurations of such wiring came together as integrated circuits, or microchips, invented independently by Jack Kilby at Texas Instruments and by Robert Noyce, one of the “traitorous eight”, at Fairchild. If transistors are the letters of an alphabet, microchips are the words formed from those letters, and computers are compositions of dozens of these microchips. All digital electronic devices are composed of microchips, with many of them sharing the same common subset of chips.

Robert Noyce, along with Gordon Moore, would go on to form Integrated Electronics, now better known as Intel. It was at Intel that he oversaw the work of Ted Hoff, who led the design of the greatest microchip of them all: the microprocessor. The microprocessor, or Central Processing Unit (CPU), found in all personal computers (PCs) is a single, highly complex microchip that functions as the machine’s brain.

Co-founder Gordon Moore, meanwhile, became famous for his observation that the number of transistors on a microchip would double roughly every two years. Moore’s observation became Moore’s Law and has largely held up since it was first posited in 1965. Current Intel Pentium 4 processors pack over 100 million transistors into a matchbox-sized chip capable of adding over 5,000 million numbers per second. Contrast this with the roughly 18,000 vacuum tubes in the 30-tonne ENIAC, which could add only 5,000 numbers per second, and the significance of transistor technology becomes clear. If the Greeks had had an Intel Pentium 4, they could have saved themselves centuries of mathematical labouring.
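
The arithmetic behind that growth is easy to check. Starting from the roughly 2,300 transistors of the first Intel microprocessor (the 4004 of 1971) and doubling every two years, a short Python sketch of the projection, not actual product data, lands in the hundred-million range by the early 2000s:

```python
# Moore's doubling, projected forward from the first Intel microprocessor
# (the 4004 of 1971, with roughly 2,300 transistors) -- an illustration only.
transistors, year = 2300, 1971
while year <= 2005:
    print(year, f"{transistors:,}")
    transistors *= 2
    year += 2
# The projection passes 100 million transistors in the early 2000s, roughly
# the count quoted above for the Pentium 4.
```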

Intel processors started their legacy in 1975 by powering the first commercially successful personal computer, the MITS Altair 8800, with an Intel 8080 processor. Microsoft founders Bill Gates and Paul Allen would go on to develop Altair BASIC, its first programming language. Interestingly enough, in the same year Advanced Micro Devices (AMD) — also formed by a group of Fairchild defectors — reverse engineered the Intel 8080 and began the long-running Intel-AMD rivalry.

While the Altair was sold as a hobbyist kit, the Apple I, developed around the same time by hobbyist Steve Wozniak and sold with the help of his close friend Steve Jobs, came as a fully assembled circuit board. The two subsequently founded Apple Computer in Jobs’ family garage. Today, more than 30 years later, Jobs serves as the visionary and CEO of Apple.

It was the non-commercial Xerox Alto, however, developed in 1973, that took the title of first personal computer. The Alto, built at Xerox PARC (Palo Alto Research Center) in Palo Alto, California, was one of a dozen inventions to come out of the research centre, including the graphical user interface, object-oriented programming and the wide application of the mouse. After seeing a demo of the Alto, Apple engineers purportedly adopted its concepts for their own commercial computer, the Lisa, which eventually proved to be too expensive and ahead of its time. The lack of commercial demand meant that over 2,000 unsold Lisas ended up buried in a landfill.

Contrary to the 1943 remark attributed to IBM chief Thomas Watson that “there is a world market for maybe five computers,” personal computers found widespread demand in a growing market that has today reached nearly two billion units. This figure primarily represents PCs, but their siblings and cousins (cellphones, PDAs, laptops) far exceed even the human population of the planet. Whether in the form of GPS tracking devices, rain-sensing windshield wipers or electronic hearing implants, microchips continue to shrink and integrate into our lives.

While the hunger for more powerful and smaller chips is insatiable, Moore’s Law seems to be giving way as the current generation of microprocessors shows signs of plateauing. Even though the natural laws of physics dictate that bits can, in principle, be as small as the atoms in which they are stored, we are still far from that atomic threshold. The problem lies in the economics of miniaturisation: increasingly expensive fabrication plants for producing smaller chips yield disproportionately diminishing returns. Nonetheless, all hope is not yet lost, as scientists are already exploring the frontiers of sub-atomic particles.

Atoms consist of a nucleus of protons and neutrons, with electrons orbiting around it. Adding or removing electrons gives the atom a negative or positive charge, and such states, or the quantum states of the particles themselves, can in principle act as bits. Protons and neutrons are in turn made up of three quarks each. Understanding the nature of these particles and harnessing their quantum behaviour could make today’s most powerful supercomputers pale in comparison. If such quantum computers ever materialise, they will in theory be able to compute in a matter of days what would, by today’s computing ability, take a few million years.

While the shape, form and power of computing devices continues to evolve, a parallel evolution has been taking place in the related field of communication technology.

The first electrical telegraphs were already sending messages in the 1830s, more than a century before the ENIAC. George Stibitz, a researcher at Bell Labs during the 1930s and 1940s, used a teletypewriter (essentially a typewriter hooked up to a telephone line) to communicate with a calculator on the other end and receive the results of remote computation. This was the first time a computer had been operated remotely over a phone line.

The US Department of Defense’s Advanced Research Projects Agency (ARPA, later DARPA) duly noted this missing link between computers and initiated efforts to fill the void. Around 1962, a series of memos by J.C.R. Licklider describing a “Galactic Network” laid the conceptual foundations of the internet. Shortly thereafter, the agency commissioned work on a packet switched network, and a young Vinton Cerf became deeply involved in the research. Cerf’s efforts contributed heavily to the design of the first network of computers and later earned him, along with Robert Kahn, the title “father of the internet”.

The resilience of the internet derives from the packet switched model Cerf and his colleagues detailed. In such a model, all information is divided into tiny packets. Each packet is transmitted separately and embarks on a journey to find its destination on the internet. Its only strategy for getting there is to ask intermediate routers (which direct traffic on the internet) for directions to the next router that might lead the way, and so on, until the last router points it to its final destination. Anyone who has ever gotten lost and asked for directions can probably relate to a packet. A dozen things can and do go wrong: packets get lost in transit, are intercepted by hackers or arrive at their destination out of order.
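
The idea can be sketched in a few lines of Python — a toy illustration of packetising and reassembly, not a real network protocol:

```python
import random

# The packet idea in miniature (not a real protocol): chop a message into
# numbered packets, let them arrive in any order, reassemble by number.
message = "ALL INFORMATION IS DIVIDED INTO TINY PACKETS"
packets = [(seq, message[i:i + 8])
           for seq, i in enumerate(range(0, len(message), 8))]

random.shuffle(packets)        # packets arrive out of order...
received = sorted(packets)     # ...and the receiver puts them back in sequence
print("".join(chunk for seq, chunk in received) == message)   # True
```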

The research and prototyping for refining the packet switched network began at the University of California at Los Angeles (UCLA) where Cerf was doing graduate work. By 1969 the Advanced Research Projects Agency Network (ARPANET) would take shape as UCLA, University of California at Santa Barbara, Stanford Research Institute and University of Utah came together to form a network.

Together with Robert Kahn and the contributions of dozens of other individuals, Cerf would go on to develop the Transmission Control Protocol (TCP) at his new home at Stanford, where he had taken up an assistant professorship in computer science and electrical engineering. After four iterations, the TCP suite was finalised in 1978, following an exciting demonstration in July 1977 when a packet was sent on a 94,000 mile round trip across the ARPANET without losing a single bit. As a result of its resilient design and infrastructure, TCP/IP (Transmission Control Protocol/Internet Protocol) became the standard for transferring data across networks. Relying on TCP/IP, the ARPANET grew into the internet and has since continued to scale unchecked to become what it is today.
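
To this day, a TCP round trip can be set up in a handful of lines using Python’s standard socket module. The sketch below runs entirely on the local machine; the message and setup are illustrative, not part of the historical demonstration.

```python
import socket, threading

# A minimal TCP round trip on the local machine, using Python's standard
# socket module; the message and port choice are illustrative.
def echo_once(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))            # echo the bytes straight back

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))                    # 0 = let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
client.sendall(b"hello, ARPANET")
print(client.recv(1024))                         # b'hello, ARPANET' -- nothing lost
client.close()
server.close()
```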

The internet was primarily used for transferring and sharing data. It handled documents, but webpages as such did not exist until Tim Berners-Lee, an independent contractor at CERN, became frustrated with how difficult it was to share and update information between researchers. He transformed the internet landscape by combining it with hypertext — the hyperlinks on webpages that allow them to point to one another with the click of a mouse. These hyperlinks created a ‘global web’ of linked pages commonly referred to as the World Wide Web (WWW).
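
That ‘global web’ of linked pages is, at heart, a graph that can be walked link by link. A toy sketch in Python, with made-up page names:

```python
# A toy "web": made-up pages pointing at one another through hyperlinks.
# Starting from one page, following links reaches the rest of the web.
links = {
    "home.html":     ["research.html", "notes.html"],
    "research.html": ["notes.html", "home.html"],
    "notes.html":    ["home.html"],
}

visited, to_visit = set(), ["home.html"]
while to_visit:
    page = to_visit.pop()
    if page not in visited:
        visited.add(page)
        to_visit.extend(links[page])
print(visited)   # every page is reachable by clicking from link to link
```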

As far as communication networks go, the internet overshadows the telephony network, integrates the television, radio and newspapers and challenges even our physical social realm. Its humble beginnings ultimately brought the communication revolution to all its glory not only for humans but also for devices.

Through microchips, electronic devices became aware of their own function. A chip acting as the brain inside a cellphone encodes every bit of relevant information about the host. Relying on exacting communication protocols, these devices suddenly become aware of the existence of other devices made up of similar microchips and can speak to them in a similar language.

This new species of electronic beings is continually evolving, trying to overcome its cultural differences so that an alarm clock can talk to the coffee maker in the morning or a health monitor can check our vital signs during recovery. These modern-day slaves encapsulate tiny worker atoms that manage for us what our preoccupied minds would rather not. The quality of life during our brief welcome on this planet has been elevated because of them, and in return for doing everything they are told, they ask for nothing. If we are God’s creatures, then computers are ours: a manifestation of the human spirit and potential.

Article source : http://aleembawany.com