
Thursday, 21 January 2016

History of Robotics





Although the science of robotics only came about in the 20th century, the history of human-invented automation has a much lengthier past. In fact, the ancient Greek engineer Hero of Alexandria produced two texts, Pneumatica and Automata, that testify to the existence of hundreds of different kinds of “wonder” machines capable of automated movement. Of course, robotics in the 20th and 21st centuries has advanced radically to include machines capable of assembling other machines and even robots that can be mistaken for human beings.
The word robotics was inadvertently coined by science fiction author Isaac Asimov in his 1941 story “Liar!” Science fiction authors throughout history have been interested in humanity's ability to produce self-motivating machines and lifeforms, from the ancient Greek myth of Pygmalion to Mary Shelley's Dr. Frankenstein and Arthur C. Clarke's HAL 9000. Essentially, a robot is a re-programmable machine that is capable of movement in the completion of a task. Robots use special programming that differentiates them from other machines and machine tools, such as CNC machines. Robots have found uses in a wide variety of industries due to their durability and precision.
 
Historical Robotics
Many sources attest to the popularity of automatons in ancient and Medieval times. Ancient Greeks and Romans developed simple automatons for use as tools, toys, and as part of religious ceremonies. Prefiguring modern industrial robots, the Greek god Hephaestus was said to have built automatons to work for him in a workshop. Unfortunately, none of the early automatons are extant.
In the Middle Ages, in both Europe and the Middle East, automatons were popular as part of clocks and religious worship. The Arab polymath Al-Jazari (1136-1206) left texts describing and illustrating his various mechanical devices, including a large elephant clock that moved and sounded at the hour, a musical robot band, and a waitress automaton that served drinks. In Europe, there is an extant automaton monk that kisses the cross in its hands. Many other automata were created showing moving animals and humanoid figures that operated on simple cam systems; by the 18th century, automata were understood well enough, and technology had advanced far enough, that much more complex pieces could be made. French engineer Jacques de Vaucanson is credited with creating the first successful biomechanical automaton, a human figure that plays a flute. Automata were so popular that their makers toured Europe entertaining heads of state such as Frederick the Great and Napoleon Bonaparte.
 
Victorian Robots
 
The Industrial Revolution and the increased focus on mathematics, engineering and science in England in the Victorian age added to the momentum towards actual robotics. Charles Babbage (1791-1871) worked to develop the foundations of computer science in the early-to-mid nineteenth century, his most notable projects being the difference engine and the analytical engine. Although never completed due to lack of funds, these two machines laid out the basics of mechanical calculation. Others, such as Ada Lovelace, recognized the future possibility of computers creating images or playing music.
Automata continued to provide entertainment during the 19th century, but coterminous with this period was the development of steam-powered machines and engines that helped to make manufacturing quicker and more efficient. Factories began to employ machines to increase either output or precision in the production of many products.

The Twentieth Century to Today

In 1920, Karel Capek published his play R.U.R. (Rossum's Universal Robots), which introduced the word “robot.” It was taken from the Czech word robota, meaning something akin to “monotonous or forced labor.” However, it was more than thirty years before the first industrial robot went to work. In the 1950s, George Devol designed the Unimate, a robotic arm device that transported die castings in a General Motors plant in New Jersey, where it started work in 1961. Unimation, the company Devol founded with robotics entrepreneur Joseph Engelberger, was the first robot manufacturing company. The robot was originally seen as a curiosity, to the extent that it even appeared on The Tonight Show in 1966. Soon, robotics began to develop into another tool in the industrial manufacturing arsenal.
 
Robotics became a burgeoning science and more money was invested. Robots spread to Japan, South Korea and many parts of Europe over the last half century, to the extent that projections put the 2011 population of industrial robots at around 1.2 million. Additionally, robots have found a place in other spheres, as toys and entertainment, military weapons, search and rescue assistants, and many other jobs. Essentially, as programming and technology improve, robots find their way into many jobs that in the past have been too dangerous, dull or impossible for humans to do. Indeed, robots are being launched into space to complete the next stages of extraterrestrial and extrasolar research.

What is computer programming?







Aren't Programmers Just Nerds?:
Programming is a creative process carried out by programmers to instruct a computer on how to do a task. Hollywood has helped instill an image of programmers as uber-techies who can sit down at a computer and break any password in seconds or make highly tuned warp engines improve performance by 500% with just one tweak. Sadly, the reality is far less interesting!
  • Definition of a Program
  • What is a Programming Language?
  • What is Software?
So Programming Is Boring? No!:
Computers can be programmed to do interesting things. In the UK, a system has been running for several years that reads car number plates. The car is seen by a camera, the captured image is instantly processed to extract the number plate details, those details are run through a national car registration database, and any alerts for that vehicle (stolen, etc.) are flagged up within four seconds.
With the right attachments, a computer could be programmed to perform dentistry. Testing that would be interesting and might be a bit scary!
Two Different Types Of Software:
Older computers, generally those with black and white displays and no mouse, tend to run console applications. There are still plenty of these about; they are very popular for rapid data entry.
The other type of application requires a mouse; these are called GUI programs, and writing them is known as event-driven programming. They are seen on Windows PCs, Linux PCs and Apple Macs. Programming these applications is a bit harder than programming for the console, but newer programming languages like the following have simplified it:
  • Visual Basic
  • Delphi
  • C#
What Do Programs Do?:
Fundamentally, programs manipulate numbers and text. These are the building blocks of all programs. Programming languages let you use them in different ways, e.g. adding numbers together or storing data on disk for later retrieval.
These numbers and text are called variables, and they can be handled singly or in structured collections. In C++, a variable can be used to count numbers, or a struct variable can hold payroll details for an employee (see the sketch after this list), such as:
  • Name
  • Salary
  • Company Id Number
  • Total Tax Paid
  • SSN
A database can hold millions of these records and fetch them very rapidly.
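As a minimal sketch of that idea, the record above could be declared as a C++ struct. The field names below simply mirror the list; a real payroll system would define its own:

    #include <string>

    // One payroll record, grouping related variables into a single type.
    struct Employee {
        std::string name;
        double      salary;
        int         companyId;
        double      totalTaxPaid;
        std::string ssn;
    };

    int main() {
        // A single record; a database table could hold millions of these.
        Employee e{"A. Employee", 52000.0, 1042, 10400.0, "000-00-0000"};
        return 0;
    }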
Programs Must Be Written For An Operating System:
Programs don't exist by themselves; they need an operating system, unless they are the operating system! Common targets include:
  • Win32
  • Linux
  • Mac
Before Java, programs needed rewriting for each operating system. A program that ran on a Linux box could not run on a Windows box or a Mac. With Java it is now far easier to write a program once and then run it everywhere, as it is compiled to a common code called bytecode, which is then interpreted. Each operating system has a Java interpreter written for it, called a Java Virtual Machine (JVM), that knows how to interpret bytecode. C# has something similar.
Programs Use Operating Systems Code:
Unless you're selling software and want to run it on every different operating system, you are more likely to need to modify it for new versions of the same operating system. Programs use features provided by the operating system and if those change then the program must change or it will break.
Many applications written for Windows 2000 or XP use the Local Machine part of the registry. Under Windows Vista this causes problems, and Microsoft is advising people to rewrite the code affected by it. Microsoft has done this to make Vista more secure.
Computers Can Talk To Other Computers:
When connected in a network, they can even run programs on each other or transfer data via ports. Programs you write can also do this. This makes programming a little harder, as you have to cope with situations like the following (a minimal sketch appears after this list):
  • When a network cable is pulled out.
  • Another networked PC is switched off.
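As a sketch of what that coping looks like in practice, here is a minimal TCP client using the POSIX sockets API (the address and port are placeholder values from the documentation ranges). Every call is checked, because connect() fails if the other machine is off and send() can fail mid-transfer if the cable is pulled:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(7);                         // placeholder port
        inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);  // placeholder address

        // Fails if the other PC is switched off or unreachable.
        if (connect(fd, (sockaddr*)&addr, sizeof addr) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }

        const char msg[] = "hello";
        // Can fail mid-transfer if the network cable is pulled out.
        if (send(fd, msg, sizeof msg, 0) < 0) perror("send");
        close(fd);
    }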
Some advanced programming languages let you write programs that run their parts on different computers. This only works if the problem can use parallelism; some problems cannot be divided this way:
  • Nine women cannot produce one child between them in just one month!
Programming Peripherals Attached to Your Computer:
If you have a peripheral, say a computer-controlled video camera, it will come with a cable that hooks it up to the PC and some interfacing software to control it. It may also come with:
  • an API (Application Programming Interface)
  • an SDK (Software Development Kit)
Either of these lets you write software to control it. You could then program it to switch on and record during the hours when you are out of the house. If your PC can read sound levels from the microphone, then you might write code that starts the camera recording when the sound level goes above a limit you specify. Many peripherals can be programmed like this.
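A minimal sketch of that sound-triggered recorder is below. The Microphone and Camera types and their level(), start() and stop() calls are invented stand-ins for whatever a real device's SDK provides, stubbed out here so the sketch is self-contained:

    #include <chrono>
    #include <thread>

    // Hypothetical stand-ins for the device SDK; a real SDK will differ.
    struct Microphone { double level() { return 0.0; /* stub: read via SDK */ } };
    struct Camera {
        bool recording = false;
        void start() { recording = true;  /* stub: SDK record call */ }
        void stop()  { recording = false; /* stub: SDK stop call */ }
    };

    int main() {
        Microphone mic;
        Camera cam;
        const double threshold = 0.6;  // trigger level chosen by the user

        while (true) {  // poll roughly ten times a second
            double level = mic.level();
            if (level > threshold && !cam.recording) cam.start();
            else if (level <= threshold && cam.recording) cam.stop();
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
    }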
Games Are Just Programs:
Games on PCs use special libraries:
  • DirectX
  • XNA
  • SDL
so that they can write to the display hardware very rapidly. Game screens update at over 60 times per second, so 3D games software has to move everything in 3D space, detect collisions, etc., then render the 3D view onto a flat surface (the screen!) 60 times each second. That's a very short period of time, but video card hardware now does an increasing amount of the rendering work. GPU chips are optimized for fast rendering and can do these operations up to 10x faster than a CPU can, even with the fastest software.
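As a minimal sketch of the 60-updates-per-second idea, here is the skeleton of a fixed-rate game loop. update() and render() are empty stand-ins for real engine work, and a real game would set running to false on a quit event:

    #include <chrono>
    #include <thread>

    void update() { /* move objects in 3D space, detect collisions */ }
    void render() { /* project the 3D scene onto the flat screen */ }

    int main() {
        using clock = std::chrono::steady_clock;
        const auto frame = std::chrono::microseconds(16667);  // ~1/60 second
        bool running = true;
        while (running) {
            auto start = clock::now();
            update();
            render();
            // Sleep away whatever remains of this frame's 16.7 ms budget.
            std::this_thread::sleep_until(start + frame);
        }
    }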
Conclusion:
Many programmers write software as a creative outlet. The web is full of websites with source code developed by amateur programmers who did it for the heck of it and are happy to share their code. Linux started this way, when Linus Torvalds shared code that he had written.
The intellectual effort in writing a medium-sized program is probably comparable to writing a book, except you never need to debug a book! There is a joy in finding out new ways to make something happen, or in solving a particularly thorny problem. If your programming skills are good enough, then you could get a full-time job as a programmer.



What is TCP/IP and How Does It Make the Internet Work?

TCP/IP – A Brief Explanation
The Internet works by using a protocol called TCP/IP, or Transmission Control Protocol/Internet Protocol. TCP/IP is the underlying communication language of the Internet. In basic terms, TCP/IP allows one computer to talk to another computer via the Internet by compiling packets of data and sending them to the right location. For those who don't know, a packet, sometimes more formally referred to as a network packet, is a unit of data transmitted from one location to another. Much as the atom is a basic building block of matter, a packet is the smallest unit of transmitted information over the Internet.
Defining TCP
As indicated by the name, there are two layers to TCP/IP. The top layer, TCP, is responsible for taking large amounts of data, compiling the data into packets and sending them on their way to be received by a fellow TCP layer, which turns the packets back into useful information/data.
Defining IP
The bottom layer, IP, is the locational aspect of the pair, allowing the packets of information to be sent and received at the correct location. If you think about IP in terms of a map, the IP layer serves as the packet's GPS for finding the correct destination. Much like a car driving on a highway, each packet passes through gateway computers (signs on the road), which serve to forward the packets to the right destination.
In summary, TCP is the data. IP is the Internet location GPS.
That is how the Internet works on the surface. Let’s take a look below the surface at the abstraction layers of the Internet.
The Four Abstraction Layers Embedded in TCP/IP
The four abstraction layers are the link layer (lowest layer), the Internet layer, the transport layer and the application layer (top layer).
They work in the following fashion:
  1. The Link Layer is the physical network equipment used to interconnect nodes and servers.
  2. The Internet Layer connects hosts to one another across networks.
  3. The Transport Layer resolves all host-to-host communication.
  4. The Application Layer is utilized to ensure communication between applications on a network.
In English, the four abstraction layers embedded in TCP/IP allow packets of data, application programs and physical network equipment to communicate with one another over the Internet to ensure packets are sent intact and to the correct location.
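As a conceptual sketch only (real protocol headers contain many more fields than this), the nesting of those four layers can be pictured as each layer wrapping the data handed down from the layer above; the addresses here are placeholder values:

    #include <string>

    // Simplified, illustrative layering; not real header layouts.
    struct AppData    { std::string payload; };                       // application layer
    struct TcpSegment { int srcPort; int dstPort; AppData data; };    // transport layer
    struct IpPacket   { std::string srcIp, dstIp; TcpSegment seg; };  // Internet layer
    struct EthFrame   { std::string srcMac, dstMac; IpPacket pkt; };  // link layer

    int main() {
        // An application message wrapped for transmission.
        EthFrame frame{"aa:bb:cc:dd:ee:ff", "11:22:33:44:55:66",
                       {"192.0.2.10", "203.0.113.5",
                        {49152, 80, {"GET / HTTP/1.1"}}}};
        return 0;
    }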
Now that you know the base definition of TCP/IP and how the Internet works, we need to discuss why all of this matters.
The Internet is About Communication and Access
The common joke about the Internet is that it is a series of tubes where data is sent and received at different locations. The analogy isn't bad. However, it isn't complete.
The Internet is more like a series of tubes with various connection points, various transmission points, various send/receive points, various working speeds and a governing body watching over the entire process.
To understand why TCP/IP is needed, here’s a quick example.
I live in Gainesville, Florida. However, because I once lived in Auckland, New Zealand, for an extended period of time, I enjoy checking the local New Zealand news on a weekly basis.
To do this, I read The New Zealand Herald by visiting www.nzherald.co.nz. As you might have guessed from the URL, The New Zealand Herald is digitally based in New Zealand (i.e. the other side of the world from Gainesville).
The Number of Hops Needed for Packets to Be Transmitted
For the connection to be made from my computer located in Gainesville to a server hosting The New Zealand Herald based in New Zealand, packets of data have to be sent to multiple data centers through multiple gateways and through multiple verification channels to ensure my request finds the right destination.
The common Internet parlance for this is finding out how many hops it takes for one packet of information to be sent to another location.
Running a trace route (the traceroute command on Linux and macOS, tracert on Windows) can show you the number of hops along the way. If you are wondering, there are 17 hops between my location in Gainesville and the server hosting The New Zealand Herald website.
TCP/IP is needed to ensure that information reaches its intended destination. Without TCP/IP, packets of information would never arrive where they need to be and the Internet wouldn’t be the pool of useful information that we know it to be today.

Sunday, 17 January 2016

How the binary numeric system works

 
 
 
 
Learning how the binary numeric system works may seem like an overwhelming task, but the system itself is actually relatively easy.
The Basic Concepts of Binary Numeric Systems and Codes: 
The traditional numeric system is based on ten characters. Each one can be repeated however many times is necessary in order to express a certain quantity or value. Binary numbers work on basically the same principle, but instead of ten characters they make use of only two. The characters “1” and “0” can be combined to express all the same values as their more traditional counterparts.
With only two characters in use, combinations can seem a bit more awkward than in a conventional numeric system. Each character can only represent a basic “on” or “off” in the position that it occupies, but, just like conventional digits that hold a certain place within a numeric expression, binary digits can be combined in such a way that they represent any number needed to complete an expression, sequence or equation.
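A short worked example: each binary position is worth twice the one to its right (1, 2, 4, 8, ...), so decimal 13 is 8 + 4 + 0 + 1, written 1101 in binary. A few lines of C++ make the correspondence easy to check:

    #include <bitset>
    #include <iostream>

    int main() {
        // Decimal 13 = 8 + 4 + 0 + 1 = binary 1101.
        std::bitset<8> bits(13);
        std::cout << bits << '\n';  // prints 00001101
        return 0;
    }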
  
Electronic Memory Storage and Binary Numbers:

Electronic data storage, like that used in computers or similar devices, operates based on minute electrical and magnetic charges. The challenge of converting this principle into a workable way to express numbers reveals the advantage offered by a numeric system based on the simple concept of “on” or “off”. Each individual character is called a bit, and will be either a “1” or a “0” depending on the presence or absence of an electromagnetic charge.

While unwieldy for any system other than a computational device capable of reading and making use of the numbers at terrific speeds, this system is ideal for electronic and computational devices. Used in far more than just your personal computer, the binary numeric system is at the heart of any number of electronic devices that possess even a modest degree of sophistication. Learning more about this system and its uses can hold plenty of advantages for programmers, students of mathematics and anyone with a keen interest in learning more about the world around them.

 

Binary Numeric System Uses:

The first computers were analog machines that did not need electricity to function. Even so, they were able to make effective use of the earliest practical examples of the binary numeric system. The addition of electricity and the use of primitive components like vacuum tubes allowed the earliest generations of electronic computers to advance rapidly in terms of applications and performance.

What is binary code, the history behind it and popular uses


  
  

All computer language is based in binary code. It is the back end of all computer functioning. Binary means that there is a code of either 0 or 1 for a computer to toggle between, and all computer functions rapidly toggle between 0 and 1 at an incomprehensible speed. This is how computers have come to assist humans in tasks that would otherwise take much longer to complete. (The human brain, by contrast, functions holistically and handles other kinds of very complicated tasks, such as reasoning and analytical thought, far better than a computer.)
The code in a computer language, with regard to text that the central processing unit (CPU) of a computer will read, is based in ASCII strings that are standardized as strings of zeros and ones representing each letter of the alphabet or each number. ASCII stands for American Standard Code for Information Interchange, a standard of 7-bit binary codes that translate into computer logic to represent the text, letters and symbols that humans recognize. The characters represented in the ASCII system are numbered from 0 to 127.

Each binary string has eight binary bits that look like a bunch of zeros and ones arranged in a pattern unique to each character. With this type of code, 256 different possible values can be represented, covering the large group of symbols, letters and operating instructions that can be given to the mainframe. From these codes are derived character strings and then bit strings, and bit strings can represent decimal numbers.
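For instance, the letter “A” is ASCII code 65, stored as the 8-bit pattern 01000001. A short C++ snippet can print the code and bit pattern for a few characters:

    #include <bitset>
    #include <iostream>

    int main() {
        // Print each character's ASCII code and its 8-bit binary pattern.
        for (char c : {'A', 'B', 'a'}) {
            std::cout << c << " = " << int(c)
                      << " = " << std::bitset<8>(c) << '\n';
        }
        return 0;  // A = 65 = 01000001, B = 66 = 01000010, a = 97 = 01100001
    }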

The binary numbers can be found in the great Vedic literatures, the shastras, written in the first language of mankind, Sanskrit; more specifically, they are located in the Chandahsutra, originally committed to text by Pingala around the 4th century. This is an estimation, as Sanskrit was a language that was only sung for many years before mankind had a need to write on paper. Before the need to write arose, mankind had a highly developed memory, and so writing was not even a part of life at that time.

Counterintuitively, more modern historical documents note that mankind has moved on from Sanskrit. There were originally no written texts, as important information was recited verbally; textbooks were simply not required. According to the shastras, mankind became less fortunate and memory began to decline, requiring texts and books to be created for keeping track of important information. Once this became a necessity, binary code was first traced to these great texts, and then, long after that, around the 17th century, the great philosopher and co-inventor of calculus, Gottfried Leibniz, derived a system of logic for verbal statements that could be completely represented in a mathematical code. He theorized that life could be reduced to simple codes of rows of combinations of zeros and ones. Though he did not know what this system would eventually be used for, George Boole later built on the idea, developing Boolean logic, which uses the on/off system of zeros and ones for basic algebraic operations. These on or off codes can be implemented rapidly by computers for a seemingly unlimited number of applications. All computer language is based in this binary system of logic.
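As a small illustration of the Boolean operations mentioned above, C++ exposes them directly as bitwise operators on 1s and 0s; this prints the truth tables for AND, OR and NOT:

    #include <iostream>

    int main() {
        // Truth tables for Boole's basic operations on single bits.
        for (int a : {0, 1}) {
            for (int b : {0, 1}) {
                std::cout << a << " AND " << b << " = " << (a & b)
                          << "   " << a << " OR " << b << " = " << (a | b) << '\n';
            }
            std::cout << "NOT " << a << " = " << (1 - a) << '\n';
        }
        return 0;
    }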

What is Nanotechnology?




The scientific field of nanotechnology is still evolving, and there doesn’t seem to be one definition that everybody agrees on. It is known that nano deals with matter on a very small scale: larger than atoms but smaller than a breadcrumb. It is also known that matter at the nano scale can behave differently than bulk matter. Beyond that, individuals and groups focus on different aspects of nanotechnology.
Here are a few definitions of nanotechnology for your consideration.
The following definition is probably the most barebones and generally agreed upon:
Nanotechnology is the study and use of structures between 1 nanometer (nm) and 100 nanometers in size. To put these measurements in perspective, you would have to stack 1 billion nanometer-sized particles on top of each other to reach the height of a 1-meter-high (about 3-feet 3-inches-high) hall table. Another popular comparison is that you can fit about 80,000 nanometers in the width of a single human hair.
The next definition is from the Foresight Institute and adds a mention of the various fields of science that come into play with nanotechnology:
Structures, devices, and systems having novel properties and functions due to the arrangement of their atoms on the 1 to 100 nanometer scale. Many fields of endeavor contribute to nanotechnology, including molecular physics, materials science, chemistry, biology, computer science, electrical engineering, and mechanical engineering.
  The European Commission offers the following definition, which both repeats the fact mentioned in the previous definition that materials at the nanoscale have novel properties, and positions nano vis-à-vis its potential in the economic marketplace:
Nanotechnology is the study of phenomena and fine-tuning of materials at atomic, molecular and macromolecular scales, where properties differ significantly from those at a larger scale. Products based on nanotechnology are already in use and analysts expect markets to grow by hundreds of billions of euros during this decade.
  This next definition from the National Nanotechnology Initiative adds the fact that nanotechnology involves certain activities, such as measuring and manipulating nanoscale matter:
Nanotechnology is the understanding and control of matter at dimensions between approximately 1 and 100 nanometers, where unique phenomena enable novel applications. Encompassing nanoscale science, engineering, and technology, nanotechnology involves imaging, measuring, modeling, and manipulating matter at this length scale.
 The last definition is from Thomas Theis, director of physical sciences at the IBM Watson Research Center. It offers a broader and interesting perspective of the role and value of nanotechnology in our world:
[Nanotechnology is] an upcoming economic, business, and social phenomenon. Nano-advocates argue it will revolutionize the way we live, work and communicate.



During the Middle Ages, philosophers attempted to transmute base materials into gold in a process called alchemy. While their efforts proved fruitless, the pseudoscience of alchemy paved the way for the real science of chemistry. Through chemistry, we learned more about the world around us, including the fact that all matter is composed of atoms. The types of atoms and the way those atoms join together determine a substance's properties.

Nanotechnology is a multidisciplinary science that looks at how we can manipulate matter at the molecular and atomic level. To do this, we must work on the nanoscale, a scale so small that we can't see it with a light microscope. In fact, one nanometer is just one-billionth of a meter. Atoms are smaller still. It's difficult to quantify an atom's size, as atoms don't tend to hold a particular shape, but in general a typical atom is about one-tenth of a nanometer in diameter.

Bubble-pen lithography allows researchers to create nanodevices



 
Researchers at the Cockrell School of Engineering at The University of Texas at Austin have developed a device and technique called bubble-pen lithography, which can handle nanoparticles (tiny pieces of gold, silicon and other materials used in nanomanufacturing) without damaging them. The method uses microbubbles to inscribe nanoparticles onto a surface.
Using microbubbles, the technique allows researchers to quickly, gently and precisely handle the tiny particles to more easily build tiny machines, biomedical sensors, optical computers, solar panels and other devices. This advanced control is key to harnessing the properties of the nanoparticles.
Using their bubble-pen device, the researchers focus a laser underneath a sheet of gold nanoislands to generate a hotspot that creates a microbubble out of vaporised water. The bubble attracts and captures a nanoparticle through a combination of gas pressure, thermal and surface tension, surface adhesion and convection. The laser then steers the microbubble to move the nanoparticle to a site on the surface. When the laser is turned off, the microbubble disappears, leaving the particle on the surface. If necessary, the researchers can expand or reduce the size of the microbubble by increasing or decreasing the laser beam's power.
"The ability to control a single nanoparticle and fix it to a substrate without damaging it could open up great opportunities for the creation of new materials and devices," assistant professor, Yuebing Zheng said. "The capability of arranging the particles will help to advance a class of materials, known as metamaterials, with properties and functions that do not exist in current natural materials."
According to Prof Zheng, bubble-pen lithography can leverage a design software program in the same way as a 3D printer, so it can deposit nanoparticles in real time in a pre-programmed pattern or design. The researchers were able to write the UT Austin Longhorn symbol and create a dome shape out of nanoparticle beads.
In comparison to other existing lithography methods, bubble-pen lithography has several advantages, Prof Zheng says. First, the technique can be used to test prototypes and ideas for devices and materials more quickly. Second, the technique has the potential for large-scale, low-cost manufacturing of nanomaterials and devices. Other lithography techniques require more resources and a clean room environment.
Prof Zheng hopes to advance bubble-pen lithography by developing a multiple-beam processing technique for industrial-level production of nanomaterials and nanodevices. He is also planning to develop a portable version of the technique that works like a mobile phone for use in prototyping.

Author
Tom Austin-Morgan