
Friday, January 22, 2016

When was the first computer invented? A brief history

When was the first computer invented?

There is no easy answer to this question due to the many different classifications of computers. The first mechanical computer, conceived by Charles Babbage in 1822, doesn't really resemble what most would consider a computer today. This document therefore lists each of the computer firsts, starting with the Difference Engine and leading up to the computers we use today.
Note: Early inventions that helped lead up to the computer, such as the abacus, the calculator, and tabulating machines, are not accounted for in this document.

The word "computer" was first used

The word "computer" was first recorded as being used in 1613 and originally was used to describe a human who performed calculations or computations. The definition of a computer remained the same until the end of the 19th century, when the industrial revolution gave rise to machines whose primary purpose was calculating.

First mechanical computer or automatic computing engine concept

In 1822, Charles Babbage conceptualized and began developing the Difference Engine, considered to be the first automatic computing machine. The Difference Engine was capable of computing several sets of numbers and making hard copies of the results. Babbage received some help with the development of his machines from Ada Lovelace, considered by many to be the first computer programmer for her work and notes on the Analytical Engine. Unfortunately, because of a lack of funding, Babbage was never able to complete a full-scale functional version of this machine. In June 1991, the London Science Museum completed the Difference Engine No. 2 for the bicentennial year of Babbage's birth and later completed the printing mechanism in 2000.
Analytical Engine 
In 1837, Charles Babbage proposed the first general mechanical computer, the Analytical Engine. The Analytical Engine contained an arithmetic logic unit (ALU), basic flow control, and integrated memory, making it the first general-purpose computer concept. Unfortunately, because of funding issues, this computer was also never built while Charles Babbage was alive. In 1910, Henry Babbage, Charles Babbage's youngest son, was able to complete a portion of this machine, which could perform basic calculations.

First programmable computer

The Z1 was created by German Konrad Zuse in his parents' living room between 1936 and 1938. It is considered to be the first electro-mechanical binary programmable computer, and the first really functional modern computer.
Z1 computer

First concepts of what we consider a modern computer

The Turing machine was first proposed by Alan Turing in 1936 and became the foundation for theories about computing and computers. The machine was a device that printed symbols on paper tape in a manner that emulated a person following a series of logical instructions. Without these fundamentals, we wouldn't have the computers we use today.

The first electric programmable computer

Colossus Mark 2 
The Colossus was the first electric programmable computer, developed by Tommy Flowers, and first demonstrated in December 1943. The Colossus was created to help the British code breakers read encrypted German messages.

The first digital computer

Short for Atanasoff-Berry Computer, the ABC began development by Professor John Vincent Atanasoff and graduate student Cliff Berry in 1937. Its development continued until 1942 at the Iowa State College (now Iowa State University).
The ABC was an electrical computer that used vacuum tubes for digital computation, including binary math and Boolean logic, but it had no CPU. On October 19, 1973, US federal judge Earl R. Larson signed his decision that the ENIAC patent by J. Presper Eckert and John Mauchly was invalid and named Atanasoff the inventor of the electronic digital computer.
The ENIAC, invented by J. Presper Eckert and John Mauchly at the University of Pennsylvania, began construction in 1943 and was not completed until 1946. It occupied about 1,800 square feet, used about 18,000 vacuum tubes, and weighed almost 50 tons. Although the judge ruled that the ABC was the first digital computer, many still consider the ENIAC to be the first digital computer because it was fully functional.
ENIAC

The first stored program computer

The early British EDSAC is considered to be the first stored-program electronic computer. The computer performed its first calculation on May 6, 1949, and later ran one of the earliest graphical computer games, a version of noughts and crosses known as OXO.
EDSAC      Manchester Mark 1
Around the same time, the Manchester Mark 1 was another computer that could run stored programs. Built at the Victoria University of Manchester, the first version of the Mark 1 computer became operational in April 1949.  Mark 1 was used to run a program to search for Mersenne primes for nine hours without error on June 16 and 17 that same year.

The first computer company

The first computer company was the Electronic Controls Company, founded in 1949 by J. Presper Eckert and John Mauchly, the same individuals who helped create the ENIAC. The company was later renamed EMCC (Eckert-Mauchly Computer Corporation) and released a series of mainframe computers under the UNIVAC name.

First computer with a program stored in memory

UNIVAC 1101 
First delivered to the United States government in 1950, the UNIVAC 1101 or ERA 1101 is considered to be the first computer that was capable of storing and running a program from memory.

First commercial computer

In 1942, Konrad Zuse began working on the Z4, which later became the first commercial computer. The computer was sold to Eduard Stiefel, a mathematician at the Swiss Federal Institute of Technology Zurich, on July 12, 1950.

IBM's first computer

On April 7, 1953, IBM publicly introduced the 701, its first commercial scientific computer.

The first computer with RAM

MIT introduced the Whirlwind machine on March 8, 1955, a revolutionary computer that was the first digital computer with magnetic core RAM and real-time graphics.
Whirlwind machine

The first transistor computer

Transistors 
The TX-0 (Transistorized Experimental computer) was the first transistorized computer, demonstrated at the Massachusetts Institute of Technology in 1956.

The first minicomputer

In 1960, Digital Equipment Corporation released its first of many PDP computers, the PDP-1.

The first desktop and mass-market computer

In 1964, the first desktop computer, the Programma 101, was unveiled to the public at the New York World's Fair. It was invented by Pier Giorgio Perotto and manufactured by Olivetti. About 44,000 Programma 101 computers were sold, each with a price tag of $3,200.
In 1968, Hewlett Packard began marketing the HP 9100A, considered to be the first mass-marketed desktop computer.

The first workstation

Although it was never sold, the first workstation is considered to be the Xerox Alto, introduced in 1974. The computer was revolutionary for its time and included a fully functional computer, display, and mouse. The computer operated like many computers today utilizing windows, menus and icons as an interface to its operating system. Many of the computer's capabilities were first demonstrated in The Mother of All Demos by Douglas Engelbart on December 9, 1968.

The first microprocessor

Intel introduced the first microprocessor, the Intel 4004, on November 15, 1971.

The first micro-computer

The Vietnamese-French engineer André Truong Trong Thi, along with François Gernelle, developed the Micral computer in 1973. Considered the first "microcomputer", it used the Intel 8008 processor and was the first commercial computer sold fully assembled rather than as a kit. It originally sold for $1,750.

The first personal computer

In 1975, Ed Roberts coined the term "personal computer" when he introduced the Altair 8800, although the first personal computer is considered by many to be the Kenbak-1, first introduced for $750 in 1971. The Kenbak-1 relied on a series of switches for inputting data and produced output by turning a series of lights on and off.
Altair 8800 Computer

The first laptop or portable computer

IBM 5100 
The IBM 5100 was the first portable computer, released in September 1975. The computer weighed 55 pounds and had a five-inch CRT display, a tape drive, a 1.9 MHz PALM processor, and 64 KB of RAM. The picture is an ad for the IBM 5100 taken from a November 1975 issue of Scientific American.
The first truly portable computer or laptop is considered to be the Osborne I, which was released in April 1981 and developed by Adam Osborne. The Osborne I weighed 24.5 pounds, had a 5-inch display, 64 KB of memory, and two 5 1/4" floppy drives, ran the CP/M 2.2 operating system, included a modem, and cost US$1,795.
The IBM PC Division (PCD) later released its first portable computer, the IBM Portable, in 1984, which weighed in at 30 pounds. Later, in 1986, IBM PCD announced its first laptop computer, the PC Convertible, weighing 12 pounds. Finally, in 1994, IBM introduced the IBM ThinkPad 775CD, the first notebook with an integrated CD-ROM.

The first Apple computer

The Apple I (Apple 1) was the first Apple computer and originally sold for $666.66. The computer kit was developed by Steve Wozniak in 1976 and contained a 6502 8-bit processor and 4 KB of memory, which was expandable to 8 or 48 KB using expansion cards. Although the Apple I had a fully assembled circuit board, the kit still required a power supply, display, keyboard, and case to be operational. Below is a picture of an Apple I from an advertisement by Apple.
Apple I computer

The first IBM personal computer

IBM PC 5150
IBM introduced its first personal computer, called the IBM PC, in 1981. The computer was code-named, and is still sometimes referred to as, the Acorn. It had an 8088 processor and 16 KB of memory, which was expandable to 256 KB, and utilized MS-DOS.

The first PC clone

The Compaq Portable is considered to be the first PC clone and was released in March 1983 by Compaq. The Compaq Portable was 100% compatible with IBM computers and was capable of running any software developed for them.
  • See the other computer company firsts below for other IBM-compatible computers.

The first multimedia computer

In 1992, Tandy Radio Shack became one of the first companies to release a computer based on the MPC standard with its introduction of the M2500 XL/2 and M4020 SX computers.

Other computer company firsts

Below is a listing of some major computer companies' first computers.
Commodore - In 1977, Commodore introduced its first computer, the "Commodore PET".
Compaq - In March 1983, Compaq released its first computer and the first 100% IBM compatible computer, the "Compaq Portable."
Dell - In 1985, Dell introduced its first computer, the "Turbo PC."
Hewlett Packard - In 1966, Hewlett Packard released its first general computer, the "HP-2115."
NEC - In 1958, NEC built its first computer, the "NEAC 1101."
Toshiba - In 1954, Toshiba introduced its first computer, the "TAC" digital computer.





Thursday, January 21, 2016

History of Robotics





Although the science of robotics only came about in the 20th century, the history of human-invented automation has a much lengthier past. In fact, the ancient Greek engineer Hero of Alexandria produced two texts, Pneumatica and Automata, that testify to the existence of hundreds of different kinds of "wonder" machines capable of automated movement. Of course, robotics in the 20th and 21st centuries has advanced radically to include machines capable of assembling other machines and even robots that can be mistaken for human beings.
The word robotics was inadvertently coined by science fiction author Isaac Asimov in his 1941 story "Liar!" Science fiction authors throughout history have been interested in man's capability of producing self-motivating machines and lifeforms, from the ancient Greek myth of Pygmalion to Mary Shelley's Dr. Frankenstein and Arthur C. Clarke's HAL 9000. Essentially, a robot is a re-programmable machine that is capable of movement in the completion of a task. Robots use special coding that differentiates them from other machines and machine tools, such as CNC equipment. Robots have found uses in a wide variety of industries due to their robustness and precision.
 
Historical Robotics
Many sources attest to the popularity of automatons in ancient and Medieval times. Ancient Greeks and Romans developed simple automatons for use as tools, toys, and as part of religious ceremonies. Predating modern industrial robots, the Greek god Hephaestus was supposed to have built automatons to work for him in his workshop. Unfortunately, none of the early automatons are extant.
In the Middle Ages, in both Europe and the Middle East, automatons were popular as part of clocks and religious worship. The Arab polymath Al-Jazari (1136-1206) left texts describing and illustrating his various mechanical devices, including a large elephant clock that moved and sounded at the hour, a musical robot band and a waitress automaton that served drinks. In Europe, there is an automaton monk extant that kisses the cross in its hands. Many other automata were created that showed moving animals and humanoid figures that operated on simple cam systems, but in the 18th century, automata were understood well enough and technology advanced to the point where much more complex pieces could be made. French engineer Jacques de Vaucanson is credited with creating the first successful biomechanical automaton, a human figure that plays a flute. Automata were so popular that they traveled Europe entertaining heads of state such as Frederick the Great and Napoleon Bonaparte.
 
Victorian Robots
 
The Industrial Revolution and the increased focus on mathematics, engineering and science in England in the Victorian age added to the momentum towards actual robotics. Charles Babbage (1791-1871) worked to develop the foundations of computer science in the early-to-mid nineteenth century, his most successful projects being the Difference Engine and the Analytical Engine. Although never completed due to lack of funds, these two machines laid out the basics of mechanical calculation. Others, such as Ada Lovelace, recognized the future possibility of computers creating images or playing music.
Automata continued to provide entertainment during the 19th century, but coterminous with this period was the development of steam-powered machines and engines that helped to make manufacturing much more efficient and quick. Factories began to employ machines to increase either workloads or precision in the production of many products.

The Twentieth Century to Today

 In 1920, Karel Capek published his play R.U.R. (Rossum’s Universal Robots), which introduced the word “robot.” It was taken from an old Slavic word that meant something akin to “monotonous or forced labor.” However, it was thirty years before the first industrial robot went to work. In the 1950s, George Devol designed the Unimate, a robotic arm device that transported die castings in a General Motors plant in New Jersey, which started work in 1961. Unimation, the company Devol founded with robotic entrepreneur Joseph Engelberger, was the first robot manufacturing company. The robot was originally seen as a curiosity, to the extent that it even appeared on The Tonight Show in 1966. Soon, robotics began to develop into another tool in the industrial manufacturing arsenal.
 
Robotics became a burgeoning science and more money was invested. Robots spread to Japan, South Korea and many parts of Europe over the last half century, to the extent that projections for the 2011 population of industrial robots are around 1.2 million. Additionally, robots have found a place in other spheres, as toys and entertainment, military weapons, search and rescue assistants, and many other jobs. Essentially, as programming and technology improve, robots find their way into many jobs that in the past have been too dangerous, dull or impossible for humans to achieve. Indeed, robots are being launched into space to complete the next stages of extraterrestrial and extrasolar research.

What is computer programming?







Aren't Programmers Just Nerds?:
Programming is a creative process done by programmers to instruct a computer on how to do a task. Hollywood has helped instill an image of programmers as uber techies who can sit down at a computer and break any password in seconds or make highly tuned warp engines improve performance by 500% with just one tweak. Sadly the reality is far less interesting!
  • Definition of a Program
  • What is a Programming Language?
  • What is Software?
So Programming Is Boring? No!:
Computers can be programmed to do interesting things. In the UK, a system has been running for several years that reads car number plates. A camera sees the car and the captured image is instantly processed: the number plate details are extracted, run through a national car registration database, and any alerts for that vehicle (a stolen vehicle report, for example) are flagged up within four seconds.
With the right attachments, a computer could be programmed to perform dentistry. Testing that would be interesting and might be a bit scary!
Two Different Types Of Software:
Older computers, generally those with black-and-white displays and no mouse, tend to run console applications. There are still plenty of these about; they are very popular for rapid data entry.
The other type of application requires a mouse; these are called GUI programs and use event-driven programming. They are seen on Windows PCs, Linux PCs and Apple Macs. Programming these applications is a bit harder than programming console applications, but newer programming languages like these have simplified it:
  • Visual Basic
  • Delphi
  • C#
What Do Programs Do?:
Fundamentally, programs manipulate numbers and text. These are the building blocks of all programs. Programming languages let you use them in different ways, e.g. adding numbers or storing data on disk for later retrieval.
These numbers and text are called variables and can be handled singly or in structured collections. In C++, a variable can be used to count numbers, or a struct variable can hold payroll details for an employee, such as:
  • Name
  • Salary
  • Company Id Number
  • Total Tax Paid
  • SSN
A database can hold millions of these records and fetch them very rapidly.
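As a rough sketch of this idea, the C++ snippet below groups the payroll fields listed above into a single struct and stores a few such records in a collection. The field names mirror the list, while the type choices and sample values are illustrative assumptions, not a real payroll format.

```cpp
#include <string>
#include <vector>

// One payroll record, modelled on the fields listed above.
struct Employee {
    std::string name;
    double      salary;
    int         companyId;
    double      totalTaxPaid;
    std::string ssn;
};

int main() {
    // A database table of such records can be modelled in memory as a vector.
    std::vector<Employee> payroll;
    payroll.push_back({"A. Programmer", 52000.0, 1042, 10400.0, "000-00-0000"});
    payroll.push_back({"B. Developer",  61000.0, 1043, 12950.0, "000-00-0001"});
    return 0;
}
```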
Programs Must Be Written For An Operating System:
Programs don't exist by themselves but need an operating system, unless they are the operating system!
  • Win32
  • Linux
  • Mac
Before Java, programs needed rewriting for each operating system. A program that ran on a Linux box could not run on a Windows box or a Mac. With Java, it is now far easier to write a program once and then run it everywhere, because it is compiled to a common code called bytecode, which is then interpreted. Each operating system has a Java interpreter, called a Java Virtual Machine (JVM), written for it that knows how to interpret bytecode. C# has something similar.
Programs Use Operating Systems Code:
Unless you're selling software and want to run it on every different operating system, you are more likely to need to modify it for new versions of the same operating system. Programs use features provided by the operating system and if those change then the program must change or it will break.
Many applications written for Windows 2000 or XP use the Local Machine part of the registry. Under Windows Vista this will cause problems and Microsoft is advising people to rewrite code affected by this. Microsoft have done this to make Vista more secure.
Computers Can Talk To Other Computers:
When connected in a network, they can even run programs on each other or transfer data via ports. Programs you write can also do this. This makes programming a little harder as you have to cope with situations like
  • When a network cable is pulled out.
  • Another networked PC is switched off.
Some advanced programming languages let you write programs that run their parts on different computers. This only works if the problem can use parallelism. Some problems cannot be divided this way:
  • Nine women cannot produce one child between them in just one month!
Programming Peripherals attached to your Computer:
If you have a peripheral, say a computer controlled video camera, it will come with a cable that hooks it up to the PC and some interfacing software to control it. It may also come with
  • API
  • SDK
that lets you write software to control it. You could then program it to switch on and record during the hours when you are out of the house. If your PC can read sound levels from the microphone then you might write code that starts the camera recording when the sound level is above a limit that you specified. Many peripherals can be programmed like this.
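A minimal sketch of that kind of program is shown below. The Camera and Microphone classes are purely hypothetical stand-ins for whatever interfaces a real device's SDK would provide; the polling loop simply starts recording whenever the sound level rises above a chosen limit.

```cpp
#include <chrono>
#include <thread>

// Hypothetical device interfaces: a real SDK would supply its own classes
// and function names; these exist only to illustrate the idea.
struct Camera     { void startRecording() {} void stopRecording() {} };
struct Microphone { double soundLevel() { return 0.0; } };  // 0.0 .. 1.0

int main() {
    Camera camera;
    Microphone mic;
    const double threshold = 0.6;   // user-chosen trigger level
    bool recording = false;

    // Poll the microphone a few times a second and toggle the camera.
    // Runs until the program is stopped.
    while (true) {
        double level = mic.soundLevel();
        if (level > threshold && !recording) {
            camera.startRecording();
            recording = true;
        } else if (level <= threshold && recording) {
            camera.stopRecording();
            recording = false;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(250));
    }
}
```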
Games Are Just Programs:
Games on PCs use special libraries:
  • DirectX
  • XNA
  • SDL
These libraries let games write to the display hardware very rapidly. Game screens update more than 60 times per second, so 3D games software has to move everything in 3D space, detect collisions and so on, and then render the 3D view onto a flat surface (the screen!) 60 times each second. That's a very short period of time, but video card hardware now does an increasing amount of the rendering work. GPU chips are optimized for fast rendering and can do these operations up to 10x faster than a CPU can, even with the fastest software.
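To make the 60-updates-per-second idea concrete, here is a minimal fixed-rate game loop in C++. The updateWorld and renderFrame functions are empty placeholders standing in for the simulation and rendering work a real engine (using DirectX, XNA or SDL) would do each frame.

```cpp
#include <chrono>
#include <thread>

// Placeholders for the real work a game engine would do each frame.
void updateWorld(double dt) { (void)dt; /* move objects, detect collisions */ }
void renderFrame()          { /* draw the 3D scene onto the 2D screen */ }

int main() {
    using Clock = std::chrono::steady_clock;
    const auto frameTime = std::chrono::milliseconds(16);  // roughly 60 frames per second

    for (int frame = 0; frame < 600; ++frame) {   // run for about ten seconds
        auto start = Clock::now();
        updateWorld(1.0 / 60.0);                  // advance the simulation one tick
        renderFrame();                            // rasterise the 3D view
        std::this_thread::sleep_until(start + frameTime);  // hold the 60 Hz pacing
    }
    return 0;
}
```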
Conclusion:
Many programmers write software as a creative outlet. The web is full of websites with source code developed by amateur programmers who did it for the heck of it and are happy to share their code. Linux started this way, when Linus Torvalds shared code that he had written.
The intellectual effort in writing a medium sized program is probably comparable to writing a book, except you never need to debug a book! There is a joy to finding out new ways to make something happen, or solving a particularly thorny problem. If your programming skills are good enough then you could get a full-time job as a programmer.



What is TCP/IP and How Does It Make the Internet Work?

TCP/IP – A Brief Explanation
The Internet works by using a protocol called TCP/IP, or Transmission Control Protocol/Internet Protocol. TCP/IP is the underlying communication language of the Internet. In basic terms, TCP/IP allows one computer to talk to another computer via the Internet by compiling packets of data and sending them to the right location. For those who don't know, a packet, sometimes more formally referred to as a network packet, is a unit of data transmitted from one location to another. Much as the atom is a basic unit of matter, a packet is the smallest unit of transmitted information over the Internet.
Defining TCP
As indicated in the name, there are two layers to TCP/IP. The top layer, TCP, is responsible for taking large amounts of data, compiling it into packets and sending them on their way to be received by a fellow TCP layer, which turns the packets back into useful information/data.
Defining IP
The bottom layer, IP, is the locational aspect of the pair, allowing the packets of information to be sent and received at the correct location. If you think about IP in terms of a map, the IP layer serves as the packet's GPS to find the correct destination. Much like a car driving on a highway, each packet passes through gateway computers (signs on the road) that serve to forward the packets to the right destination.
In summary, TCP is the data. IP is the Internet location GPS.
That is how the Internet works on the surface. Let’s take a look below the surface at the abstraction layers of the Internet.
The Four Abstraction Layers Embedded in TCP/IP
The four abstraction layers are the link layer (lowest layer), the Internet layer, the transport layer and the application layer (top layer).
They work in the following fashion:
  1. The Link Layer is the physical network equipment used to interconnect nodes and servers.
  2. The Internet Layer connects hosts to one another across networks.
  3. The Transport Layer resolves all host-to-host communication.
  4. The Application Layer is utilized to ensure communication between applications on a network.
In English, the four abstraction layers embedded in TCP/IP allow packets of data, application programs and physical network equipment to communicate with one another over the Internet to ensure packets are sent intact and to the correct location.
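As a concrete illustration of those layers from a programmer's point of view, here is a minimal sketch of a TCP client in C++ using the standard POSIX sockets API (assuming a Unix-like system, and connecting to the placeholder host example.com). The application layer hands a byte stream to TCP; the kernel's TCP and IP layers handle packetization, addressing and routing; the link layer moves the bits over the physical network.

```cpp
#include <cstdio>
#include <cstring>
#include <netdb.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main() {
    // Resolve the host name to an address (the IP layer routes packets there).
    addrinfo hints{};
    hints.ai_family   = AF_UNSPEC;      // IPv4 or IPv6
    hints.ai_socktype = SOCK_STREAM;    // TCP
    addrinfo* res = nullptr;
    if (getaddrinfo("example.com", "80", &hints, &res) != 0) {
        std::fprintf(stderr, "could not resolve host\n");
        return 1;
    }

    // Open a TCP connection; the TCP layer splits whatever we send into
    // packets, numbers them, and retransmits any that get lost along the way.
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        std::perror("connect");
        return 1;
    }

    // The application layer only ever sees a reliable byte stream.
    const char* request =
        "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";
    send(fd, request, std::strlen(request), 0);

    char buffer[1024];
    ssize_t n = recv(fd, buffer, sizeof(buffer) - 1, 0);
    if (n > 0) {
        buffer[n] = '\0';
        std::printf("%s\n", buffer);   // first bytes of the server's reply
    }

    close(fd);
    freeaddrinfo(res);
    return 0;
}
```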
Now that you know the base definition of TCP/IP and how the Internet works, we need to discuss why all of this matters.
The Internet is About Communication and Access
The common joke about the Internet is it is a series of tubes where data is sent and received at different locations. The analogy isn’t bad. However, it isn’t complete.
The Internet is more like a series of tubes with various connection points, various transmission points, various send/receive points, various working speeds and a governing body watching over the entire process.
To understand why TCP/IP is needed, here’s a quick example.
I live in Gainesville, Florida. However, because I once lived in Auckland, New Zealand, for an extended period of time, I enjoy checking the local New Zealand news on a weekly basis.
To do this, I read The New Zealand Herald by visiting www.nzherald.co.nz. As you might have guessed from the URL, The New Zealand Herald is digitally based in New Zealand (i.e. the other side of the world from Gainesville).
The Number of Hops for Packets to Be Transmitted
For the connection to be made from my computer located in Gainesville to a server hosting The New Zealand Herald based in New Zealand, packets of data have to be sent to multiple data centers through multiple gateways and through multiple verification channels to ensure my request finds the right destination.
The common Internet parlance for this is finding out how many hops it takes for one packet of information to be sent to another location.
Running a trace route can show you the number of hops along the way. If you are wondering, there are 17 hops between my location in Gainesville and the server hosting The New Zealand Herald website.
TCP/IP is needed to ensure that information reaches its intended destination. Without TCP/IP, packets of information would never arrive where they need to be and the Internet wouldn’t be the pool of useful information that we know it to be today.

Sunday, January 17, 2016

How the binary numeric system works

 
 
 
 
Learning how the binary numeric system works may seem like an overwhelming task, but the system itself is actually relatively easy.
The Basic Concepts of Binary Numeric Systems and Codes: 
The traditional numeric system is based on ten characters. Each one can be repeated however many times is necessary in order to express a certain quantity or value. Binary numbers work on basically the same principle, but instead of ten characters they make use of only two. The characters "1" and "0" can be combined to express all the same values as their more traditional counterparts.
With only two characters in use, combinations of them can seem a bit more awkward than a conventional numeric system. Each character can only represent a basic "on" or "off" in the position it occupies, but, just like conventional digits that hold a certain place within a numeric expression, the characters can be combined in such a way that they represent any number needed to complete an expression, sequence or equation.
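A small C++ illustration of this positional idea: the same quantity is written once in the familiar ten-character system and once using only "1" and "0".

```cpp
#include <bitset>
#include <iostream>

int main() {
    // The decimal number 13 in binary positional notation: 1*8 + 1*4 + 0*2 + 1*1.
    unsigned int value = 13;

    // std::bitset prints the same value using only the characters '1' and '0'.
    std::cout << value << " in binary is " << std::bitset<8>(value) << '\n';
    // Output: 13 in binary is 00001101
    return 0;
}
```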
  
Electronic Memory Storage and Binary Numbers:

Electronic data storage, like that used in computers or similar devices, operates based on minute electrical and magnetic charges. The challenge of converting this principle into a workable way to express numbers reveals the advantage offered by a numeric system based on the simple concept of “on” or “off”. Each individual character is called a bit, and will be either a “1” or a “0” depending on the presence or absence of an electromagnetic charge.

While unwieldy for use with anything other than a computational device capable of reading and making use of the numbers at terrific speeds, this system is ideal for electronic and computational devices. Used in far more than just your personal computer, the binary numeric system is at the heart of any number of electronic devices that possess even a modest degree of sophistication. Learning more about this system and its uses can hold plenty of advantages for programmers, students of mathematics and anyone with a keen interest in learning more about the world around them.

 

Binary Numeric System Uses:

The first computers were analog machines that did not need electricity to function. Even so, they were able to make effective use of the earliest practical examples of the binary numeric system. The addition of electricity and the use of primitive components like vacuum tubes allowed the earliest generations of computers to advance rapidly in terms of applications and performance.

What is binary code, the history behind it and popular uses


  
  

All computer language is based on binary code. It is the back end of all computer functioning. Binary means that there is a code of either 0 or 1 for a computer to toggle between, and all computer functions rapidly toggle between these two states at an incomprehensible speed. This is how computers have come to assist humans in tasks that would otherwise take far longer to complete. The human brain, functioning holistically, remains much faster than a computer at other types of very complicated tasks, such as reasoning and analytical thought.

The code in a computer language, with regard to text that a central processing unit (CPU) will read, is based on ASCII strings: standardized strings of zeros and ones that represent each letter of the alphabet, each digit and each symbol. ASCII stands for American Standard Code for Information Interchange, a standard of 7-bit binary codes that translate into computer logic to represent the text, letters and symbols that humans recognize. The ASCII system represents 128 characters, numbered 0 to 127.

Each binary string has eight binary bits that look like a bunch of zeros and ones in a pattern unique to each character. With this type of code, 256 different possible values can be represented, covering the large group of symbols, letters and operating instructions that can be given to the machine. From these codes are derived character strings and then bit strings, and bit strings can represent decimal numbers.
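As a small C++ illustration of these bit patterns, the program below prints a couple of characters together with their ASCII code numbers and the 8-bit strings a computer actually stores.

```cpp
#include <bitset>
#include <iostream>
#include <string>

int main() {
    // Print each character of a short word with its ASCII code number
    // and the 8-bit binary pattern that represents it in memory.
    for (char c : std::string("Hi")) {
        std::cout << c << " = " << int(c) << " = "
                  << std::bitset<8>(static_cast<unsigned char>(c)) << '\n';
    }
    // Output:
    // H = 72 = 01001000
    // i = 105 = 01101001
    return 0;
}
```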

The binary numbers can be found in the great Vedic literatures, the shastras, written in the first language of mankind, Sanskrit, and more specifically located in the Chandah Sutra, originally committed to text by Pingala around the 4th century. This is an estimation, as Sanskrit was a language that was only sung for many years before mankind had a need to write on paper. Before the need to write on paper arose, mankind had highly developed memory, and so writing was not even part of life at that time.

Counterintuitively, more modern historical documents note that mankind has progressed beyond Sanskrit. There were originally no written texts, as important information was recited verbally, and no textbooks were required. According to the Shastras, mankind became less fortunate and memory began to decline, requiring texts and books to be created for keeping track of important information. Once this became a necessity, the binary code was first traced to these great texts. Long after that, around the 17th century, the great philosopher and father of calculus, Gottfried Leibniz, derived a system of logic for verbal statements that could be completely represented in a mathematical code. He theorized that life could be reduced to simple codes of rows of combinations of zeros and ones, without actually knowing what this system would eventually be used for. Later, with the help of George Boole, Boolean logic was developed, using the on/off system of zeros and ones for basic algebraic operations. These on/off codes can be implemented rapidly by computers for seemingly unlimited numbers of applications. All computer language is based on this binary system of logic.

What is Nanotechnology?




The scientific field of nanotechnology is still evolving, and there doesn’t seem to be one definition that everybody agrees on. It is known that nano deals with matter on a very small scale: larger than atoms but smaller than a breadcrumb. It is also known that matter at the nano scale can behave differently than bulk matter. Beyond that, individuals and groups focus on different aspects of nanotechnology.
Here are a few definitions of nanotechnology for your consideration.
The following definition is probably the most barebones and generally agreed upon:
Nanotechnology is the study and use of structures between 1 nanometer (nm) and 100 nanometers in size. To put these measurements in perspective, you would have to stack 1 billion nanometer-sized particles on top of each other to reach the height of a 1-meter-high (about 3-feet 3-inches-high) hall table. Another popular comparison is that you can fit about 80,000 nanometers in the width of a single human hair.
The next definition is from the Foresight Institute and adds a mention of the various fields of science that come into play with nanotechnology:
Structures, devices, and systems having novel properties and functions due to the arrangement of their atoms on the 1 to 100 nanometer scale. Many fields of endeavor contribute to nanotechnology, including molecular physics, materials science, chemistry, biology, computer science, electrical engineering, and mechanical engineering.
  The European Commission offers the following definition, which both repeats the fact mentioned in the previous definition that materials at the nanoscale have novel properties, and positions nano vis-à-vis its potential in the economic marketplace:
Nanotechnology is the study of phenomena and fine-tuning of materials at atomic, molecular and macromolecular scales, where properties differ significantly from those at a larger scale. Products based on nanotechnology are already in use and analysts expect markets to grow by hundreds of billions of euros during this decade.
  This next definition from the National Nanotechnology Initiative adds the fact that nanotechnology involves certain activities, such as measuring and manipulating nanoscale matter:
Nanotechnology is the understanding and control of matter at dimensions between approximately 1 and 100 nanometers, where unique phenomena enable novel applications. Encompassing nanoscale science, engineering, and technology, nanotechnology involves imaging, measuring, modeling, and manipulating matter at this length scale.
 The last definition is from Thomas Theis, director of physical sciences at the IBM Watson Research Center. It offers a broader and interesting perspective of the role and value of nanotechnology in our world:
[Nanotechnology is] an upcoming economic, business, and social phenomenon. Nano-advocates argue it will revolutionize the way we live, work and communicate.



During the Middle Ages, philosophers attempted to transmute base materials into gold in a process called alchemy. While their efforts proved fruitless, the pseudoscience alchemy paved the way to the real science of chemistry. Through chemistry, we learned more about the world around us, including the fact that all matter is composed of atoms. The types of atoms and the way those atoms join together determines a substance's properties.

Nanotechnology is a multidisciplinary science that looks at how we can manipulate matter at the molecular and atomic level. To do this, we must work on the nanoscale -- a scale so small that we can't see it with a light microscope. In fact, one nanometer is just one-billionth of a meter in size. Atoms are smaller still. It's difficult to quantify an atom's size -- they don't tend to hold a particular shape. But in general, a typical atom is about one-tenth of a nanometer in diameter.

Bubble-pen lithography allows researchers to create nanodevices



 
 Researchers at the Cockrell School of Engineering at The University of Texas at Austin have developed a device and technique called bubble-pen lithography, which can handle nanoparticles, tiny pieces of gold, silicon and other materials used in nanomanufacturing, without damaging them. The method uses microbubbles to inscribe nanoparticles onto a surface.
Using microbubbles, the technique allows researchers to quickly, gently and precisely handle the tiny particles to more easily build tiny machines, biomedical sensors, optical computers, solar panels and other devices. This advanced control is key to harnessing the properties of the nanoparticles.
Using their bubble-pen device, the researchers focus a laser underneath a sheet of gold nanoislands to generate a hotspot that creates a microbubble out of vaporised water. The bubble attracts and captures a nanoparticle through a combination of gas pressure, thermal and surface tension, surface adhesion and convection. The laser then steers the microbubble to move the nanoparticle to a site on the surface. When the laser is turned off, the microbubble disappears, leaving the particle on the surface. If necessary, the researchers can expand or reduce the size of the microbubble by increasing or decreasing the laser beam's power.
"The ability to control a single nanoparticle and fix it to a substrate without damaging it could open up great opportunities for the creation of new materials and devices," assistant professor, Yuebing Zheng said. "The capability of arranging the particles will help to advance a class of materials, known as metamaterials, with properties and functions that do not exist in current natural materials."
According to Prof Zheng, bubble-pen lithography can leverage a design software program in the same way as a 3D printer, so it can deposit nanoparticles in real time in a pre-programmed pattern or design. The researchers were able to write the UT Austin Longhorn symbol and create a dome shape out of nanoparticle beads.
In comparison to other existing lithography methods, bubble-pen lithography has several advantages, Prof Zheng says. First, the technique can be used to test prototypes and ideas for devices and materials more quickly. Second, the technique has the potential for large-scale, low-cost manufacturing of nanomaterials and devices. Other lithography techniques require more resources and a clean room environment.
Prof Zheng hopes to advance bubble-pen lithography by developing a multiple-beam processing technique for industrial-level production of nanomaterials and nanodevices. He is also planning to develop a portable version of the technique that works like a mobile phone for use in prototyping.

Author
Tom Austin-Morgan