

Thursday, 21 January 2016

History of Robotics





Although the science of robotics only came about in the 20th century, the history of human-invented automation has a much lengthier past. In fact, the ancient Greek engineer Hero of Alexandria produced two texts, Pneumatica and Automata, that testify to the existence of hundreds of different kinds of “wonder” machines capable of automated movement. Of course, robotics in the 20th and 21st centuries has advanced radically to include machines capable of assembling other machines and even robots that can be mistaken for human beings.
The word robotics was inadvertently coined by science fiction author Isaac Asimov in his 1941 story “Liar!” Science fiction authors throughout history have been interested in man’s capability of producing self-motivating machines and lifeforms, from the ancient Greek myth of Pygmalion to Mary Shelley’s Dr. Frankenstein and Arthur C. Clarke’s HAL 9000. Essentially, a robot is a re-programmable machine that is capable of movement in the completion of a task. Robots use special programming that differentiates them from other machines and machine tools, such as CNC machines. Robots have found uses in a wide variety of industries due to their durability and precision.
 
Historical Robotics
Many sources attest to the popularity of automatons in ancient and Medieval times. Ancient Greeks and Romans developed simple automatons for use as tools, toys, and as part of religious ceremonies. Long before robots entered industry, the Greek god Hephaestus was said to have built automatons to work for him in his workshop. Unfortunately, none of the early automatons survive.
In the Middle Ages, in both Europe and the Middle East, automatons were popular as parts of clocks and in religious worship. The Arab polymath Al-Jazari (1136-1206) left texts describing and illustrating his various mechanical devices, including a large elephant clock that moved and sounded at the hour, a musical robot band and a waitress automaton that served drinks. In Europe, an automaton monk survives that kisses the cross in its hands. Many other automata were created that showed moving animals and humanoid figures operating on simple cam systems, but by the 18th century automata were understood well enough, and technology had advanced far enough, that much more complex pieces could be made. French engineer Jacques de Vaucanson is credited with creating the first successful biomechanical automaton, a human figure that plays a flute. Automata were so popular that they traveled Europe entertaining heads of state such as Frederick the Great and Napoleon Bonaparte.
 
Victorian Robots
 
The Industrial Revolution and the increased focus on mathematics, engineering and science in England in the Victorian age added to the momentum towards actual robotics. Charles Babbage (1791-1871) worked to develop the foundations of computer science in the early-to-mid nineteenth century, his most successful projects being the difference engine and the analytical engine. Although never completed due to lack of funds, these two machines laid out the basics for mechanical calculations. Others such as Ada Lovelace recognized the future possibility of computers creating images or playing music.
Automata continued to provide entertainment during the 19th century, but coterminous with this period was the development of steam-powered machines and engines that helped to make manufacturing much more efficient and quick. Factories began to employ machines to either increase work loads or precision in the production of many products. 

The Twentieth Century to Today

In 1920, Karel Čapek published his play R.U.R. (Rossum’s Universal Robots), which introduced the word “robot.” It comes from the Czech word robota, which means something akin to “monotonous or forced labor.” However, it was thirty years before the first industrial robot went to work. In the 1950s, George Devol designed the Unimate, a robotic arm device that transported die castings in a General Motors plant in New Jersey, where it started work in 1961. Unimation, the company Devol founded with robotics entrepreneur Joseph Engelberger, was the first robot manufacturing company. The robot was originally seen as a curiosity, to the extent that it even appeared on The Tonight Show in 1966. Soon, robotics began to develop into another tool in the industrial manufacturing arsenal.
 
Robotics became a burgeoning science and more money was invested. Robots spread to Japan, South Korea and many parts of Europe over the last half century, to the extent that projections for the 2011 population of industrial robots are around 1.2 million. Additionally, robots have found a place in other spheres, as toys and entertainment, military weapons, search and rescue assistants, and many other jobs. Essentially, as programming and technology improve, robots find their way into many jobs that in the past have been too dangerous, dull or impossible for humans to achieve. Indeed, robots are being launched into space to complete the next stages of extraterrestrial and extrasolar research.

What is computer programming?







Aren't Programmers Just Nerds?:
Programming is a creative process done by programmers to instruct a computer on how to do a task. Hollywood has helped instill an image of programmers as uber techies who can sit down at a computer and break any password in seconds or make highly tuned warp engines improve performance by 500% with just one tweak. Sadly the reality is far less interesting!
  • Definition of a Program
  • What is a Programming Language?
  • What is Software?
So Programming Is Boring? No!:
Computers can be programmed to do interesting things. In the UK, a system has been running for several years that reads car number plates. The car is seen by a camera, the image is captured and instantly processed to extract the number plate details, and those details are run through a national car registration database; any alerts for that vehicle (stolen, for example) are flagged up within four seconds.
With the right attachments, a computer could be programmed to perform dentistry. Testing that would be interesting and might be a bit scary!
Two Different Types Of Software:
Older computers, generally those with black and white displays and no mouse, tend to run console applications. There are still plenty of these about; they are very popular for rapid data entry.
The other type of application requires a mouse and is called a GUI program, or event driven programming. These are seen on Windows PCs, Linux PCs and Apple Macs. Programming these applications is a bit harder than for console ones, but newer programming languages such as the following have simplified it:
  • Visual Basic
  • Delphi
  • C#
What Do Programs Do?:
Fundamentally, programs manipulate numbers and text. These are the building blocks of all programs. Programming languages let you use them in different ways, e.g. adding numbers or storing data on disk for later retrieval.
These numbers and text are called variables, and they can be handled singly or in structured collections. In C++, a variable can be used to count numbers, or a struct variable can hold payroll details for an employee, such as:
  • Name
  • Salary
  • Company Id Number
  • Total Tax Paid
  • SSN
A database can hold millions of these records and fetch them very rapidly.
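As a minimal sketch of that idea (the field names and the sample values are made up for illustration), a C++ struct can group the payroll details listed above, and a container can then hold many such records:

#include <iostream>
#include <string>
#include <vector>

// Illustrative only: one record grouping the payroll fields listed above.
struct Employee {
    std::string name;
    double      salary;
    int         companyId;
    double      totalTaxPaid;
    std::string ssn;
};

int main() {
    // A vector can hold many such records; a real database would also
    // index them so they can be fetched rapidly.
    std::vector<Employee> staff;
    staff.push_back({"Ada Lovelace", 52000.0, 1815, 10400.0, "000-00-0000"});

    for (const Employee& e : staff) {
        std::cout << e.name << " earns " << e.salary
                  << " and has paid " << e.totalTaxPaid << " in tax\n";
    }
    return 0;
}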
Programs Must Be Written For An Operating System:
Programs don't exist by themselves but need an operating system, unless they are the operating system! Common examples are:
  • Windows (Win32)
  • Linux
  • Mac OS
Before Java, programs needed rewriting for each operating system. A program that ran on a Linux box could not run on a Windows box or a Mac. With Java it is now far easier to write a program once and run it everywhere, as it is compiled to a common code called bytecode, which is then interpreted. Each operating system has a Java interpreter, called a Java Virtual Machine (JVM), written for it that knows how to interpret bytecode. C# has something similar.
Programs Use Operating System Code:
Unless you're selling software and want to run it on every different operating system, you are more likely to need to modify it for new versions of the same operating system. Programs use features provided by the operating system and if those change then the program must change or it will break.
Many applications written for Windows 2000 or XP use the Local Machine part of the registry. Under Windows Vista this causes problems, and Microsoft is advising people to rewrite code affected by it. Microsoft has done this to make Vista more secure.
Computers Can Talk To Other Computers:
When connected in a network, they can even run programs on each other or transfer data via ports. Programs you write can also do this. This makes programming a little harder as you have to cope with situations like
  • When a network cable is pulled out.
  • Another networked PC is switched off.
Some advanced programming languages let you write programs that run their parts on different computers. This only works if the problem can use parallelism. Some problems cannot be divided this way:
  • Nine women cannot produce one child between them in just one month!
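For a problem that can be divided, though, the idea is simple enough to sketch. Here is a minimal example in C++ (assuming a compiler with C++11 thread support): two threads each sum half of a list of numbers, and the partial results are combined at the end.

#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(1000);
    std::iota(data.begin(), data.end(), 1);   // fill with 1, 2, ..., 1000

    long long firstHalf = 0, secondHalf = 0;
    auto mid = data.begin() + data.size() / 2;

    // Each thread works on its own half of the data, so no locking is needed.
    std::thread t1([&] { firstHalf  = std::accumulate(data.begin(), mid, 0LL); });
    std::thread t2([&] { secondHalf = std::accumulate(mid, data.end(), 0LL); });
    t1.join();
    t2.join();

    std::cout << "Total: " << (firstHalf + secondHalf) << '\n';   // prints 500500
    return 0;
}

The same split would work across two networked machines, but then the program also has to cope with the failure cases listed above.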
Programming Peripherals attached to your Computer:
If you have a peripheral, say a computer controlled video camera, it will come with a cable that hooks it up to the PC and some interfacing software to control it. It may also come with
  • an API (Application Programming Interface)
  • an SDK (Software Development Kit)
that lets you write software to control it. You could then program it to switch on and record during the hours when you are out of the house. If your PC can read sound levels from the microphone then you might write code that starts the camera recording when the sound level is above a limit that you specified. Many peripherals can be programmed like this.
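As a rough sketch of that sound-triggered idea, the loop below polls a sound level and switches recording on or off around a threshold. The functions readSoundLevel, startRecording and stopRecording are made-up stand-ins for whatever the camera's real SDK would provide; here they are stubbed out so the example runs on its own.

#include <chrono>
#include <cstdlib>
#include <iostream>
#include <thread>

// Stand-ins for a real camera/microphone SDK -- the names and behaviour are invented.
double readSoundLevel() { return std::rand() / double(RAND_MAX); } // pretend level, 0.0 to 1.0
void   startRecording() { std::cout << "camera: recording started\n"; }
void   stopRecording()  { std::cout << "camera: recording stopped\n"; }

int main() {
    const double threshold = 0.8;   // start recording above this level
    bool recording = false;

    for (int i = 0; i < 50; ++i) {  // poll a fixed number of times for the demo
        double level = readSoundLevel();
        if (level > threshold && !recording) {
            startRecording();
            recording = true;
        } else if (level <= threshold && recording) {
            stopRecording();
            recording = false;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    return 0;
}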
Games Are Just Programs:
Games on PCs use special libraries:
  • DirectX
  • XNA
  • SDL
These let games write to the display hardware very rapidly. Game screens update more than 60 times per second, so 3D games software has to move everything in 3D space, detect collisions and so on, then render the 3D view onto a flat surface (the screen!) 60 times each second. That is a very short period of time, but video card hardware now does an increasing amount of the rendering work. GPU chips are optimized for fast rendering and can do these operations up to 10x faster than a CPU can, even with the fastest software.
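As a rough illustration of that timing budget (no graphics library here, just the loop structure), this minimal fixed-rate frame loop in C++ updates some stand-in scene state and then sleeps away whatever is left of each 1/60-second frame:

#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const auto frameBudget = std::chrono::microseconds(16667);  // roughly 1/60 of a second

    double x = 0.0;                       // stand-in for the whole 3D scene state
    for (int frame = 0; frame < 60; ++frame) {
        auto start = clock::now();

        x += 0.5;                         // "move everything": update positions, check collisions
        // render(x);                     // a real game would draw the scene here

        auto elapsed = clock::now() - start;
        if (elapsed < frameBudget)        // wait out the rest of the frame
            std::this_thread::sleep_for(frameBudget - elapsed);
    }
    std::cout << "Simulated one second of 60 updates, final x = " << x << '\n';
    return 0;
}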
Conclusion:
Many programmers write software as a creative outlet. The web is full of websites with source code developed by amateur programmers who did it for the heck of it and are happy to share their code. Linux started this way when Linus Torvalds shared code that he had written.
The intellectual effort in writing a medium sized program is probably comparable to writing a book, except you never need to debug a book! There is a joy to finding out new ways to make something happen, or solving a particularly thorny problem. If your programming skills are good enough then you could get a full-time job as a programmer.



What is TCP/IP and How Does It Make the Internet Work?

TCP/IP – A Brief Explanation
The Internet works by using a protocol called TCP/IP, or Transmission Control Protocol/Internet Protocol. TCP/IP is the underlying communication language of the Internet. In basic terms, TCP/IP allows one computer to talk to another computer via the Internet by compiling packets of data and sending them to the right location. For those who don’t know, a packet, sometimes more formally referred to as a network packet, is a unit of data transmitted from one location to another. Much as the atom is the smallest unit of ordinary matter, a packet is the smallest unit of transmitted information over the Internet.
Defining TCP
As indicated in the name, there are two layers to TCP/IP. The top layer, TCP, is responsible for taking large amounts of data, compiling it into packets and sending them on their way to be received by a fellow TCP layer, which turns the packets into useful information/data.
Defining IP
The bottom layer, IP, is the locational aspect of the pair, allowing the packets of information to be sent and received at the correct location. If you think about IP in terms of a map, the IP layer serves as the packet’s GPS to find the correct destination. Much like a car driving on a highway, each packet passes through a gateway computer (signs on the road), which serves to forward the packets to the right destination.
In summary, TCP is the data. IP is the Internet location GPS.
That is how the Internet works on the surface. Let’s take a look below the surface at the abstraction layers of the Internet.
The Four Abstraction Layers Embedded in TCP/IP
The four abstraction layers are the link layer (lowest layer), the Internet layer, the transport layer and the application layer (top layer).
They work in the following fashion:
  1. The Link Layer is the physical network equipment used to interconnect nodes and servers.
  2. The Internet Layer connects hosts to one another across networks.
  3. The Transport Layer resolves all host-to-host communication.
  4. The Application Layer is utilized to ensure communication between applications on a network.
In English, the four abstraction layers embedded in TCP/IP allow packets of data, application programs and physical network equipment to communicate with one another over the Internet to ensure packets are sent intact and to the correct location.
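To make the application layer's view of this concrete, here is a minimal sketch of a TCP client in C++ using POSIX sockets (it assumes Linux or a Mac; the host example.com and port 80 are placeholders). The program only asks for a connection and then reads and writes bytes; the operating system's TCP/IP stack does the packetizing, addressing and routing described above.

#include <cstdio>
#include <cstring>
#include <netdb.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main() {
    // Ask the resolver where the server is (the "locational" job of the IP layer).
    addrinfo hints{};
    hints.ai_family   = AF_UNSPEC;      // IPv4 or IPv6, whichever is available
    hints.ai_socktype = SOCK_STREAM;    // SOCK_STREAM means "use TCP"
    addrinfo* res = nullptr;
    if (getaddrinfo("example.com", "80", &hints, &res) != 0) {
        std::fprintf(stderr, "could not resolve host\n");
        return 1;
    }

    // Open a socket and connect; the kernel handles packets, gateways and retransmission.
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        std::fprintf(stderr, "could not connect\n");
        freeaddrinfo(res);
        return 1;
    }
    freeaddrinfo(res);

    // The application layer just writes bytes; TCP splits them into packets.
    const char request[] = "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n";
    send(fd, request, std::strlen(request), 0);

    // ...and reads bytes back, reassembled in order by TCP on this side.
    char reply[512];
    ssize_t n = recv(fd, reply, sizeof(reply) - 1, 0);
    if (n > 0) {
        reply[n] = '\0';
        std::printf("%s\n", reply);
    }
    close(fd);
    return 0;
}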
Now that you know the base definition of TCP/IP and how the Internet works, we need to discuss why all of this matters.
The Internet is About Communication and Access
The common joke about the Internet is it is a series of tubes where data is sent and received at different locations. The analogy isn’t bad. However, it isn’t complete.
The Internet is more like a series of tubes with various connection points, various transmission points, various send/receive points, various working speeds and a governing body watching over the entire process.
To understand why TCP/IP is needed, here’s a quick example.
I live in Gainesville, Florida. However, because I once lived in Auckland, New Zealand, for an extended period of time, I enjoy checking the local New Zealand news on a weekly basis.
To do this, I read The New Zealand Herald, so I visit www.nzherald.co.nz. As you might have guessed from the URL, The New Zealand Herald is digitally based in New Zealand (i.e. the other side of the world from Gainesville).
The Number of Hops Needed for Packets to Be Transmitted
For the connection to be made from my computer located in Gainesville to a server hosting The New Zealand Herald based in New Zealand, packets of data have to be sent to multiple data centers through multiple gateways and through multiple verification channels to ensure my request finds the right destination.
The common Internet parlance for this is finding out how many hops it takes for one packet of information to be sent to another location.
Running a traceroute can show you the number of hops along the way. If you are wondering, there are 17 hops between my location in Gainesville and the server hosting The New Zealand Herald website.
TCP/IP is needed to ensure that information reaches its intended destination. Without TCP/IP, packets of information would never arrive where they need to be and the Internet wouldn’t be the pool of useful information that we know it to be today.

Sunday, 17 January 2016

How the binary numeric system works

 
 
 
 
Learning how the binary numeric system works may seem like an overwhelming task, but the system itself is actually relatively easy.
The Basic Concepts of Binary Numeric Systems and Codes: 
The traditional numeric system is based on ten characters. Each one can be repeated as many times as necessary in order to express a certain quantity or value. Binary numbers work on basically the same principle, but instead of ten characters they make use of only two. The characters “1” and “0” can be combined to express all the same values as their more traditional counterparts.
With only two characters in use, combinations of them can seem a bit more awkward than a conventional numeric system. Each character can only represent a basic “on” or “off” in the position that it occupies, but, just like conventional digits that hold a certain place within a number, the characters can be combined in such a way that they represent any number needed to complete an expression, sequence or equation.
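As a short worked example in C++, the loop below builds the binary form of a number by hand, using the same "divide by two and keep the remainders" method you could do on paper, and then lets std::bitset do the same job:

#include <bitset>
#include <iostream>
#include <string>

int main() {
    unsigned value = 13;   // 13 = 8 + 4 + 1, so its binary form is 1101

    // Build the binary string by hand: repeatedly take the remainder modulo 2.
    std::string manual;
    for (unsigned v = value; v > 0; v /= 2)
        manual = char('0' + v % 2) + manual;

    std::cout << value << " by hand:   " << manual << '\n';                 // 1101
    std::cout << value << " by bitset: " << std::bitset<8>(value) << '\n';  // 00001101
    return 0;
}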
  
Electronic Memory Storage and Binary Numbers:

Electronic data storage, like that used in computers or similar devices, operates based on minute electrical and magnetic charges. The challenge of converting this principle into a workable way to express numbers reveals the advantage offered by a numeric system based on the simple concept of “on” or “off”. Each individual character is called a bit, and will be either a “1” or a “0” depending on the presence or absence of an electromagnetic charge.

While unwieldy for use with any system other than a computational device capable of reading and making use of the numbers at terrific speeds, this system is ideal for electronic and computational devices. Used in far more than just your personal computer, the binary numeric system is at the heart of any number of electronic devices that possesses even a simplistic degree of sophistication. Learning more about this system and its uses can hold plenty of advantages for programmers, students of mathematics and anyone with a keen interest to learn more about the world around them.

 

Binary Numeric System Uses:

The first computers were analog machines that did not need electricity to function. Even so, they were able to make effective use of the earliest practical examples of the binary numeric system. The addition of electricity to their capacities and the use of primitive components like vacuum tubes allowed for the earliest generation of computers to advance rapidly in terms of applications and performance.

What is binary code, the history behind it and popular uses


  
  

All computer language is based in binary code. It is the back end of all computer functioning. Binary means that a computer toggles between just two states, represented by 0 and 1. All computer functions rapidly toggle between 0 and 1 at an incomprehensible speed. This is how computers have come to assist humans with tasks that would otherwise take far longer to complete. The human brain, by contrast, works holistically and is still much faster than a computer at other kinds of very complicated tasks, such as reasoning and analytical thought.

The code in a computer language, with regard to text that a central processing unit or CPU of a computer will read, is based on ASCII, a standard that assigns a string of zeros and ones to each letter of the alphabet, digit and symbol. ASCII stands for American Standard Code for Information Interchange, a standard of 7-bit binary codes that translate into computer logic to represent the text, letters and symbols that humans recognize. The ASCII system represents 128 characters, numbered 0 through 127.

Each binary string has eight binary bits, which look like a pattern of zeros and ones unique to each character of a word. With this type of code, 256 different possible values can be represented, enough for the large group of symbols, letters and operating instructions that can be given to the machine. From these codes are derived character strings and then bit strings, and bit strings can also represent decimal numbers.
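As a small sketch of that idea in C++, the following prints each character of a word together with its ASCII code and the eight-bit pattern that represents it in memory:

#include <bitset>
#include <iostream>
#include <string>

int main() {
    std::string word = "Hi";
    for (unsigned char c : word) {
        // Each character is stored as a number (its ASCII code),
        // and that number occupies eight bits of memory.
        std::cout << c << " = " << int(c) << " = " << std::bitset<8>(c) << '\n';
    }
    // Prints:  H = 72 = 01001000
    //          i = 105 = 01101001
    return 0;
}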

The binary numbers can be found in the great Vedic literatures, the shastras, written in the first language of mankind, Sanskrit, more specifically located in the Chandah Sutra and originally committed to text by Pingala around the 4th Century. This is an estimation, as Sanskrit was a language that was only sung many years before mankind had a need to write on paper. Before the need to write on paper, mankind had highly developed memory and so the need to write was not even part of life at that time.

Surprisingly, more modern historical documents note that mankind moved on from this purely oral tradition. Originally there were no written texts, as important information was recited verbally, and there were no textbooks, as they were not required. According to the shastras, mankind became less fortunate and memory began to decline, so texts and books had to be created to keep track of important information. Binary code is first traced to these great texts, and long after that, around the 17th century, the great philosopher and father of calculus, Gottfried Leibniz, derived a system of logic for verbal statements that could be completely represented in a mathematical code. He theorized that life could be reduced to simple codes of rows of combinations of zeros and ones. Leibniz did not know what this system would eventually be used for; later, George Boole developed Boolean logic, which uses the on/off system of zeros and ones for basic algebraic operations. These on/off codes can be implemented rapidly by computers for a seemingly unlimited number of applications. All computer language is based in this binary system of logic.

What is Nanotechnology?




The scientific field of nanotechnology is still evolving, and there doesn’t seem to be one definition that everybody agrees on. It is known that nano deals with matter on a very small scale: larger than atoms but smaller than a breadcrumb. It is also known that matter at the nano scale can behave differently than bulk matter. Beyond that, individuals and groups focus on different aspects of nanotechnology.
Here are a few definitions of nanotechnology for your consideration.
The following definition is probably the most barebones and generally agreed upon:
Nanotechnology is the study and use of structures between 1 nanometer (nm) and 100 nanometers in size. To put these measurements in perspective, you would have to stack 1 billion nanometer-sized particles on top of each other to reach the height of a 1-meter-high (about 3-feet 3-inches-high) hall table. Another popular comparison is that you can fit about 80,000 nanometers in the width of a single human hair.
The next definition is from the Foresight Institute and adds a mention of the various fields of science that come into play with nanotechnology:
Structures, devices, and systems having novel properties and functions due to the arrangement of their atoms on the 1 to 100 nanometer scale. Many fields of endeavor contribute to nanotechnology, including molecular physics, materials science, chemistry, biology, computer science, electrical engineering, and mechanical engineering.
  The European Commission offers the following definition, which both repeats the fact mentioned in the previous definition that materials at the nanoscale have novel properties, and positions nano vis-à-vis its potential in the economic marketplace:
Nanotechnology is the study of phenomena and fine-tuning of materials at atomic, molecular and macromolecular scales, where properties differ significantly from those at a larger scale. Products based on nanotechnology are already in use and analysts expect markets to grow by hundreds of billions of euros during this decade.
  This next definition from the National Nanotechnology Initiative adds the fact that nanotechnology involves certain activities, such as measuring and manipulating nanoscale matter:
 Nanotechnology is the understanding and control of matter at dimensions between approximately 1 and 100 nanometers, where unique phenomena enable novel applications. Encompassing nanoscale science, engineering, and technology, nanotechnology involves imaging, measuring, modeling, and manipulating matter at this length scale
 The last definition is from Thomas Theis, director of physical sciences at the IBM Watson Research Center. It offers a broader and interesting perspective of the role and value of nanotechnology in our world:
[Nanotechnology is] an upcoming economic, business, and social phenomenon. Nano-advocates argue it will revolutionize the way we live, work and communicate.



During the Middle Ages, philosophers attempted to transmute base materials into gold in a process called alchemy. While their efforts proved fruitless, the pseudoscience alchemy paved the way to the real science of chemistry. Through chemistry, we learned more about the world around us, including the fact that all matter is composed of atoms. The types of atoms and the way those atoms join together determines a substance's properties.

Nanotechnology is a multidisciplinary science that looks at how we can manipulate matter at the molecular and atomic level. To do this, we must work on the nanoscale -- a scale so small that we can't see it with a light microscope. In fact, one nanometer is just one-billionth of a meter in size. Atoms are smaller still. It's difficult to quantify an atom's size -- they don't tend to hold a particular shape. But in general, a typical atom is about one-tenth of a nanometer in diameter.

The quantum source of space-time





Black holes such as the one depicted in Interstellar (2014) can be connected by wormholes, which might have quantum origins. (Image: Warner Bros. Entertainment/Paramount Pictures)
In early 2009, determined to make the most of his first sabbatical from teaching, Mark Van Raamsdonk decided to tackle one of the deepest mysteries in physics: the relationship between quantum mechanics and gravity. After a year of work and consultation with colleagues, he submitted a paper on the topic to the Journal of High Energy Physics.
In April 2010, the journal sent him a rejection — with a referee’s report implying that Van Raamsdonk, a physicist at the University of British Columbia in Vancouver, was a crackpot.
His next submission, to General Relativity and Gravitation, fared little better: the referee’s report was scathing, and the journal’s editor asked for a complete rewrite.
But by then, Van Raamsdonk had entered a shorter version of the paper into a prestigious annual essay contest run by the Gravity Research Foundation in Wellesley, Massachusetts. Not only did he win first prize, but he also got to savour a particularly satisfying irony: the honour included guaranteed publication in General Relativity and Gravitation. The journal published the shorter essay in June 2010.
Still, the editors had good reason to be cautious. A successful unification of quantum mechanics and gravity has eluded physicists for nearly a century. Quantum mechanics governs the world of the small — the weird realm in which an atom or particle can be in many places at the same time, and can simultaneously spin both clockwise and anticlockwise. Gravity governs the Universe at large — from the fall of an apple to the motion of planets, stars and galaxies — and is described by Albert Einstein’s general theory of relativity, announced 100 years ago this month. The theory holds that gravity is geometry: particles are deflected when they pass near a massive object not because they feel a force, said Einstein, but because space and time around the object are curved.
Both theories have been abundantly verified through experiment, yet the realities they describe seem utterly incompatible. And from the editors’ standpoint, Van Raamsdonk’s approach to resolving this incompatibility was  strange. All that’s needed, he asserted, is ‘entanglement’: the phenomenon that many physicists believe to be the ultimate in quantum weirdness. Entanglement lets the measurement of one particle instantaneously determine the state of a partner particle, no matter how far away it may be — even on the other side of the Milky Way.
Einstein loathed the idea of entanglement, and famously derided it as “spooky action at a distance”. But it is central to quantum theory. And Van Raamsdonk, drawing on work by like-minded physicists going back more than a decade, argued for the ultimate irony — that, despite Einstein’s objections, entanglement might be the basis of geometry, and thus of Einstein’s geometric theory of gravity. “Space-time,” he says, “is just a geometrical picture of how stuff in the quantum system is entangled.”
This idea is a long way from being proved, and is hardly a complete theory of quantum gravity. But independent studies have reached much the same conclusion, drawing intense interest from major theorists. A small industry of physicists is now working to expand the geometry–entanglement relationship, using all the modern tools developed for quantum computing and quantum information theory.

“I would not hesitate for a minute,” says physicist Bartłomiej Czech of Stanford University in California, “to call the connections between quantum theory and gravity that have emerged in the last ten years revolutionary.”

Gravity without gravity

Much of this work rests on a discovery announced in 1997 by physicist Juan Maldacena, now at the Institute for Advanced Study in Princeton, New Jersey. Maldacena’s research had led him to consider the relationship between two seemingly different model universes. One is a cosmos similar to our own. Although it neither expands nor contracts, it has three dimensions, is filled with quantum particles and obeys Einstein’s equations of gravity. Known as anti-de Sitter space (AdS), it is commonly referred to as the bulk. The other model is also filled with elementary particles, but it has one dimension fewer and doesn’t recognize gravity. Commonly known as the boundary, it is a mathematically defined membrane that lies an infinite distance from any given point in the bulk, yet completely encloses it, much like the 2D surface of a balloon enclosing a 3D volume of air. The boundary particles obey the equations of a quantum system known as conformal field theory (CFT).
Maldacena discovered that the boundary and the bulk are completely equivalent. Like the 2D circuitry of a computer chip that encodes the 3D imagery of a computer game, the relatively simple, gravity-free equations that prevail on the boundary contain the same information and describe the same physics as the more complex equations that rule the bulk.
“It’s kind of a miraculous thing,” says Van Raamsdonk. Suddenly, he says, Maldacena’s duality gave physicists a way to think about quantum gravity in the bulk without thinking about gravity at all: they just had to look at the equivalent quantum state on the boundary. And in the years since, so many have rushed to explore this idea that Maldacena’s paper is now one of the most highly cited articles in physics.

Among the enthusiasts was Van Raamsdonk, who started his sabbatical by pondering one of the central unsolved questions posed by Maldacena’s discovery: exactly how does a quantum field on the boundary produce gravity in the bulk? There had already been hints that the answer might involve some sort of relation between geometry and entanglement. But it was unclear how significant these hints were: all the earlier work on this idea had dealt with special cases, such as a bulk universe that contained a black hole. So Van Raamsdonk decided to settle the matter, and work out whether the relationship was true in general, or was just a mathematical oddity.
He first considered an empty bulk universe, which corresponded to a single quantum field on the boundary. This field, and the quantum relationships that tied various parts of it together, contained the only entanglement in the system. But now, Van Raamsdonk wondered, what would happen to the bulk universe if that boundary entanglement were removed?
He was able to answer that question using mathematical tools introduced in 2006 by Shinsei Ryu, now at the University of Illinois at Urbana–Champaign, and Tadashi Takayanagi, now at the Yukawa Institute for Theoretical Physics at Kyoto University in Japan. Their equations allowed him to model a slow and methodical reduction in the boundary field’s entanglement, and to watch the response in the bulk, where he saw space-time steadily elongating and pulling apart (see ‘The entanglement connection’). Ultimately, he found, reducing the entanglement to zero would break the space-time into disjointed chunks, like chewing gum stretched too far.

[Graphic: ‘The entanglement connection’ (Nik Spencer/Nature)]
The geometry–entanglement relationship was general, Van Raamsdonk realized. Entanglement is the essential ingredient that knits space-time together into a smooth whole — not just in exotic cases with black holes, but always.
“I felt that I had understood something about a fundamental question that perhaps nobody had understood before,” he recalls: “Essentially, what is space-time?”

Entanglement and Einstein


Quantum entanglement as geometric glue — this was the essence of Van Raamsdonk’s rejected paper and winning essay, and an idea that has increasingly resonated among physicists. No one has yet found a rigorous proof, so the idea still ranks as a conjecture. But many independent lines of reasoning support it.
In 2013, for example, Maldacena and Leonard Susskind of Stanford published a related conjecture that they dubbed ER = EPR, in honour of two landmark papers from 1935. ER, by Einstein and American-Israeli physicist Nathan Rosen, introduced what is now called a wormhole: a tunnel through space-time connecting two black holes. (No real particle could actually travel through such a wormhole, science-fiction films notwithstanding: that would require moving faster than light, which is impossible.) EPR, by Einstein, Rosen and American physicist Boris Podolsky, was the first paper to clearly articulate what is now called entanglement.
Maldacena and Susskind’s conjecture was that these two concepts are related by more than a common publication date. If any two particles are connected by entanglement, the physicists suggested, then they are effectively joined by a wormhole. And vice versa: the connection that physicists call a wormhole is equivalent to entanglement. They are different ways of describing the same underlying reality.
No one has a clear idea of what this underlying reality is. But physicists are increasingly convinced that it must exist. Maldacena, Susskind and others have been testing the ER = EPR hypothesis to see if it is mathematically consistent with everything else that is known about entanglement and wormholes — and so far, the answer is yes.

Hidden connections


Other lines of support for the geometry–entanglement relationship have come from condensed-matter physics and quantum information theory: fields in which entanglement already plays a central part. This has allowed researchers from these disciplines to attack quantum gravity with a whole array of fresh concepts and mathematical tools.
Tensor networks, for example, are a technique developed by condensed-matter physicists to track the quantum states of huge numbers of subatomic particles. Brian Swingle was using them in this way in 2007, when he was a graduate student at the Massachusetts Institute of Technology (MIT) in Cambridge, calculating how groups of electrons interact in a solid material. He found that the most useful network for this purpose started by linking adjacent pairs of electrons, which are most likely to interact with each other, then linking larger and larger groups in a pattern that resembled the hierarchy of a family tree. But then, during a course in quantum field theory, Swingle learned about Maldacena’s bulk–boundary correspondence and noticed an intriguing pattern: the mapping between the bulk and the boundary showed exactly the same tree-like network.
Swingle wondered whether this resemblance might be more than just coincidence. And in 2012, he published calculations showing that it was: he had independently reached much the same conclusion as Van Raamsdonk, thereby adding strong support to the geometry–entanglement idea. “You can think of space as being built from entanglement in this very precise way using the tensors,” says Swingle, who is now at Stanford and has seen tensor networks become a frequently used tool to explore the geometry–entanglement correspondence.
Another prime example of cross-fertilization is the theory of quantum error-correcting codes, which physicists invented to aid the construction of quantum computers. These machines encode information not in bits but in ‘qubits’: quantum states, such as the up or down spin of an electron, that can take on values of 1 and 0 simultaneously. In principle, when the qubits interact and become entangled in the right way, such a device could perform calculations that an ordinary computer could not finish in the lifetime of the Universe. But in practice, the process can be incredibly fragile: the slightest disturbance from the outside world will disrupt the qubits’ delicate entanglement and destroy any possibility of quantum computation.
That need inspired quantum error-correcting codes, numerical strategies that repair corrupted correlations between the qubits and make the computation more robust. One hallmark of these codes is that they are always ‘non-local’: the information needed to restore any given qubit has to be spread out over a wide region of space. Otherwise, damage in a single spot could destroy any hope of recovery. And that non-locality, in turn, accounts for the fascination that many quantum information theorists feel when they first encounter Maldacena’s bulk–boundary correspondence: it shows a very similar kind of non-locality. The information that corresponds to a small region of the bulk is spread over a vast region of the boundary.

“Anyone could look at AdS–CFT and say that it’s sort of vaguely analogous to a quantum error-correcting code,” says Scott Aaronson, a computer scientist at MIT. But in work published in June, physicists led by Daniel Harlow at Harvard University in Cambridge and John Preskill of the California Institute of Technology in Pasadena argue for something stronger: that the Maldacena duality is itself a quantum error-correcting code. They have demonstrated that this is mathematically correct in a simple model, and are now trying to show that the assertion holds more generally.
“People have been saying for years that entanglement is somehow important for the emergence of the bulk,” says Harlow. “But for the first time, I think we are really getting a glimpse of how and why.”

Beyond entanglement

That prospect seems to be enticing for the Simons Foundation, a philanthropic organization in New York City that announced in August that it would provide US$2.5 million per year for at least 4 years to help researchers to move forward on the gravity–quantum information connection. “Information theory provides a powerful way to structure our thinking about fundamental physics,” says Patrick Hayden, the Stanford physicist who is directing the programme. He adds that the Simons sponsorship will support 16 main researchers at 14 institutions worldwide, along with students, postdocs and a series of workshops and schools. Ultimately, one major goal is to build up a comprehensive dictionary for translating geometric concepts into quantum language, and vice versa. This will hopefully help physicists to find their way to the complete theory of quantum gravity.
Still, researchers face several challenges. One is that the bulk–boundary correspondence does not apply in our Universe, which is neither static nor bounded; it is expanding and apparently infinite. Most researchers in the field do think that calculations using Maldacena’s correspondence are telling them something true about the real Universe, but there is little agreement as yet on exactly how to translate results from one regime to the other.
Another challenge is that the standard definition of entanglement refers to particles only at a given moment. A complete theory of quantum gravity will have to add time to that picture. “Entanglement is a big piece of the story, but it’s not the whole story,” says Susskind.
He thinks physicists may have to embrace another concept from quantum information theory: computational complexity, the number of logical steps, or operations, needed to construct the quantum state of a system. A system with low complexity is analogous to a quantum computer with almost all the qubits on zero: it is easy to define and to build. One with high complexity is analogous to a set of qubits encoding a number that would take aeons to compute.
Susskind’s road to computational complexity began about a decade ago, when he noticed that a solution to Einstein’s equations of general relativity allowed a wormhole in AdS space to get longer and longer as time went on. What did that correspond to on the boundary, he wondered? What was changing there? Susskind knew that it couldn’t be entanglement, because the correlations that produce entanglement between different particles on the boundary reach their maximum in less than a second. In an article last year, however, he and Douglas Stanford, now at the Institute for Advanced Study, showed that as time progressed, the quantum state on the boundary would vary in exactly the way expected from computational complexity.

“It appears more and more that the growth of the interior of a black hole is exactly the growth of computational complexity,” says Susskind. If quantum entanglement knits together pieces of space, he says, then computational complexity may drive the growth of space — and thus bring in the elusive element of time. One potential consequence, which he is just beginning to explore, could be a link between the growth of computational complexity and the expansion of the Universe. Another is that, because the insides of black holes are the very regions where quantum gravity is thought to dominate, computational complexity may have a key role in a complete theory of quantum gravity.
Despite the remaining challenges, there is a sense among the practitioners of this field that they have begun to glimpse something real and very important. “I didn’t know what space was made of before,” says Swingle. “It wasn’t clear that question even had meaning.” But now, he says, it is becoming increasingly apparent that the question does make sense. “And the answer is something that we understand,” says Swingle. “It’s made of entanglement.”

Quantum Relativity

Quantum Relativity: Space, Time, and Gravity in a Quantum Universe, by Mark Lawrence


 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
[Portraits: Sir Isaac Newton, 1689; Albert Einstein, about 1947]




Quantum Relativity is part of the on-going story of man's attempt to understand the rules of the universe, particularly the laws of gravity.

Gravity

In 1687, Sir Isaac Newton published his great work, Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy). In this book, Newton released his theory of gravity, the first mathematical theory of gravity ever. In order to create his theory, first Newton had to invent a new form of mathematics, Calculus.
Newton was never satisfied with his theory. The idea that the Earth pulls on the Moon with no visible or mediating agent is called action at a distance. Newton never thought this idea was credible, but he was unable to find any alternative.
Newton's theory of gravity is quite good - NASA uses it almost exclusively for all orbital calculations, and it works just fine. However, very small numerical problems with his theory were found over the years. Also, Newton's theory was heavily criticised on philosophical grounds. Newton's theory presumed there were special observers, called "inertial observers," who were the only ones to see the laws of physics in their pure form.

Quantum Mechanics

In 1899, Max Planck first introduced the quantum hypothesis. Up until Planck, it was thought that anything could be divided into smaller and smaller segments without limit, but still retaining the basic characteristics of the substance. It's hard to understand this from today's viewpoint, over 100 years later, but in 1899 the idea of atoms was still controversial. Planck introduced the notion that the electro-magnetic field could only be made up of small indivisible units.
Quantum mechanics is a very strange theory, based on the observation that atomic particles often can be in one place or another, but not in between. Quantum mechanics was developed by several people, culminating in a comprehensive theory released by Werner Heisenberg in 1925 and Erwin Schroedinger in 1926. However, almost immediately it was realized that quantum mechanics was an incomplete theory. Quantum mechanics could not accurately account for electro-magnetic forces, and quantum mechanics violated the laws of special relativity.
In 1948, quantum mechanics was replaced by a new theory simultaneously developed by Richard Feynman, Sin-Itiro Tomonaga, and Julian Schwinger, called Quantum Field Theory. Quantum Field Theory remains today our best theory of electro-magnetic forces, and is our current theory for the nuclear force and the radioactive force, more often called the strong and weak forces.

Relativity

In 1905, Albert Einstein shocked the world with three papers. Before he published these papers, Einstein was a clerk in the Swiss Patent Office - he had graduated from college with a bachelor's (four-year) degree, but his professors considered him a rather indifferent student who was not talented enough to warrant a position in graduate school to pursue a doctorate. Einstein's three papers were:
  • Brownian Motion - after this paper was published, everyone agreed that matter was made up of atoms. The atomic theory is perhaps the most fundamental part of quantum mechanics.
  • The Photo-Electric Effect - in this paper, Einstein introduced the idea of the light quantum (later named the photon), and put us firmly on the road to quantum mechanics.
  • Special Relativity - in this paper, Einstein explained that the speed of light was an absolute constant. Everyone who measures the speed of light will get the same number, regardless of how fast they are moving and how fast the light source is moving, and nothing can go faster than light. Special relativity tells us that space and time do not exist as separate entities, as Newton thought, but rather as one union, which we call space-time.
Any one of these three papers would have been enough to ensure that Einstein became known as a superb physicist. The three papers published in one summer were enough to set him apart as someone special. However, Einstein was not by any means done creating.
Einstein realized almost immediately that his theory of Special Relativity had a serious flaw: gravity could not co-exist with his new theory of space and time. So, almost immediately Einstein set out to find a new theory of gravity, a theory to replace Newton's. In 1916, 11 years after special relativity and about 230 years after Newton, Einstein published his theory of gravity, the General Theory of Relativity. In order to create this theory of gravity, Einstein had to change our notions of space and time yet again. Einstein had to postulate that we lived in a curved space-time, just as we live on the curved surface of the Earth. Einstein showed that there were no such things as Newton's inertial observers. Also, General Relativity is what is called a field theory, so Newton's spooky action at a distance was also gone.
Just as Newton was never satisfied with his theory of gravity, Einstein was never satisfied with General Relativity. Einstein was disturbed by two problems: he believed that there should be just one theory to account for both gravity and electro-magnetism, and he believed that this "unified field" theory should get rid of quantum mechanics. Although Einstein himself helped create quantum mechanics, he hated quantum mechanics until his death. One interpretation of quantum mechanics is that everything is uncertain, and everything is fundamentally governed by the laws of probability. Einstein particularly despised this notion, frequently asserting "God does not throw dice!"

Quantum Relativity

Gravity as we currently understand it cannot be reconciled with the laws of quantum mechanics. Since 1930, people have tried to invent a theory of quantum gravity. I believe Enrico Fermi was the first to propose a theory of quantum gravity, in 1931. However, Fermi's theory predicted that all forces were infinite, and therefore the universe could not exist. Most physicists think the universe does in fact exist, so it was thought that the theory of quantum gravity had some serious problems.
Shortly after quantum field theory was invented, people started trying to invent a quantum field theory of gravity. Very quickly, it was shown that this is impossible: there can be no theory of gravity which obeys the rules of quantum field theory. The quantum theory of fields simply will not work for a force with the properties of gravity. It was recognized that a completely new type of theory was required. Since this theory does not currently exist, no one is certain exactly what it looks like. However, most people presume we need a new theory of space and time which will be compatible with the laws of quantum mechanics as we know them, and somehow allow a theory of quantum gravity to exist. This new theory of space and time is often called Quantum Relativity.
This web site is all about these theories.