Showing posts with label quantum computing.

Thursday, June 19, 2014

Jeremy O'Brien: "Quantum Technologies"


Jeremy O'Brien spoke recently at Google on Quantum Technologies, a topic he has written on extensively [see his 2009 paper, with Furusawa and Vučković, Photonic Quantum Technologies]. This is interesting stuff - and likely to be the future of computing technology.

Jeremy O'Brien: "Quantum Technologies"

June 17, 2014


Jeremy O'Brien visited Google LA to deliver a talk: "Quantum Technologies." This talk took place on April 1, 2014.

Abstract:

The impact of quantum technology will be profound and far-reaching: secure communication networks for consumers, corporations and government; precision sensors for biomedical technology and environmental monitoring; quantum simulators for the design of new materials, pharmaceuticals and clean energy devices; and ultra-powerful quantum computers for addressing otherwise impossibly large datasets for machine learning and artificial intelligence applications. However, engineering quantum systems and controlling them is an immense technological challenge: they are inherently fragile, and information extracted from a quantum system necessarily disturbs the system itself. Despite these challenges, a small number of quantum technologies are now commercially available. Delivering the full promise of these technologies will require a concerted quantum engineering effort jointly between academia and industry. We will describe our progress in the Centre for Quantum Photonics towards delivering this promise using an integrated quantum photonics platform: generating, manipulating and interacting single particles of light (photons) in waveguide circuits on silicon chips.

Bio:

Jeremy O'Brien is professor of physics and electrical engineering and director of the Centre for Quantum Photonics (CQP). He received his Ph.D. in physics from the University of New South Wales in 2002 for experimental work on correlated and confined electrons in organic conductors, superconductors and semiconductor nanostructures, as well as progress towards the fabrication of a phosphorus-in-silicon quantum computer. As a research fellow at the University of Queensland (2001-2006) he worked on quantum optics and quantum information science with single photons. CQP's efforts are focused on the fundamental and applied quantum mechanics at the heart of quantum information science and technology, ranging from prototypes for scalable quantum computing to generalised quantum measurements, quantum control, and quantum metrology.

Saturday, March 01, 2014

John Martinis, "Design of a Superconducting Quantum Computer"


This Google Tech Talk is way on the geeky side, but as much of it as I could follow was really interesting.

Tech Talk: John Martinis, "Design of a Superconducting Quantum Computer"

Published on Feb 28, 2014 


John Martinis visited Google LA to give a tech talk: "Design of a Superconducting Quantum Computer." This talk took place on October 15, 2013.

Abstract:

Superconducting quantum computing is now at an important crossroads, where "proof of concept" experiments involving small numbers of qubits can be transitioned to more challenging and systematic approaches that could actually lead to building a quantum computer. Our optimism is based on two recent developments: a new hardware architecture for error detection based on "surface codes" [1], and recent improvements in the coherence of superconducting qubits [2]. I will explain how the surface code is a major advance for quantum computing, as it allows one to use qubits with realistic fidelities, and has a connection architecture that is compatible with integrated circuit technology. Additionally, the surface code allows quantum error detection to be understood using simple principles. I will also discuss how the hardware characteristics of superconducting qubits map into this architecture, and review recent results that suggest gate errors can be reduced below the threshold needed for quantum error detection.

References

[1] Austin G. Fowler, Matteo Mariantoni, John M. Martinis and Andrew N. Cleland, PRA 86, 032324 (2012).
[2] R. Barends, J. Kelly, A. Megrant, D. Sank, E. Jeffrey, Y. Chen, Y. Yin, B. Chiaro, J. Mutus, C. Neill, P. O'Malley, P. Roushan, J. Wenner, T. C. White, A. N. Cleland and John M. Martinis, arXiv:1304.2322.
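
As a rough illustration of the threshold idea in the abstract above, here is a short Python sketch (added here, not from the talk) of the commonly quoted scaling of the surface code's logical error rate with code distance; the prefactor and threshold values used are illustrative assumptions only.

# A hedged back-of-the-envelope sketch (added here, not from the talk) of the
# commonly quoted scaling of the surface code's logical error rate with code
# distance d: p_L ~ A * (p / p_th) ** ((d + 1) // 2). The prefactor A and the
# threshold p_th below are illustrative assumptions, not measured values.

def logical_error_rate(p_physical, distance, p_threshold=1e-2, prefactor=0.1):
    return prefactor * (p_physical / p_threshold) ** ((distance + 1) // 2)

for d in (3, 5, 7, 9):
    # Assume a physical gate error rate a factor of ten below threshold.
    print(f"distance {d}: logical error rate ~ {logical_error_rate(1e-3, d):.1e}")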

Bio:

John M. Martinis attended the University of California at Berkeley from 1976 to 1987, where he received two degrees in physics: B.S. (1980) and Ph.D. (1987). His thesis research focused on macroscopic quantum tunneling in Josephson junctions. After completing a post-doctoral position at the Commissariat à l'Énergie Atomique in Saclay, France, he joined the Electromagnetic Technology division at NIST in Boulder. At NIST he was involved in understanding the basic physics of the Coulomb blockade, and worked to use this phenomenon to make a new fundamental electrical standard based on counting electrons. While at NIST he also invented microcalorimeters based on superconducting sensors for x-ray microanalysis and astrophysics. In June 2004 he moved to the University of California, Santa Barbara, where he currently holds the Worster Chair. At UCSB he has continued work on quantum computation. Along with Andrew Cleland, he was awarded the 2010 AAAS Breakthrough of the Year for work showing quantum behavior of a mechanical oscillator.

Tuesday, July 02, 2013

The Philosopher's Zone - AI: Think Again w/ Hubert Dreyfus and David Deutsch


In this week's episode of The Philosopher's Zone (on Australia's Radio National), Joe Gelonesi speaks with two philosophers - Hubert Dreyfus and David Deutsch - about the prospects for artificial intelligence (AI).

Dreyfus has been critical of the AI enterprise since the 1960s, producing a list of the four basic assumptions of AI believers (via Wikipedia):
Dreyfus identified four philosophical assumptions that supported the faith of early AI researchers that human intelligence depended on the manipulation of symbols.[9] "In each case," Dreyfus writes, "the assumption is taken by workers in [AI] as an axiom, guaranteeing results, whereas it is, in fact, one hypothesis among others, to be tested by the success of such work."[10]

The biological assumption
The brain processes information in discrete operations by way of some biological equivalent of on/off switches.

In the early days of research into neurology, scientists realized that neurons fire in all-or-nothing pulses. Several researchers, such as Walter Pitts and Warren McCulloch, argued that neurons functioned similarly to the way Boolean logic gates operate, and so could be imitated by electronic circuitry at the level of the neuron.[11] When digital computers became widely used in the early 50s, this argument was extended to suggest that the brain was a vast physical symbol system, manipulating the binary symbols of zero and one. Dreyfus was able to refute the biological assumption by citing research in neurology that suggested that the action and timing of neuron firing had analog components.[12] To be fair, however, Daniel Crevier observes that "few still held that belief in the early 1970s, and nobody argued against Dreyfus" about the biological assumption.[13]

The psychological assumption
The mind can be viewed as a device operating on bits of information according to formal rules.

He refuted this assumption by showing that much of what we "know" about the world consists of complex attitudes or tendencies that make us lean towards one interpretation over another. He argued that, even when we use explicit symbols, we are using them against an unconscious background of commonsense knowledge and that without this background our symbols cease to mean anything. This background, in Dreyfus' view, was not implemented in individual brains as explicit individual symbols with explicit individual meanings.

The epistemological assumption
All knowledge can be formalized.

This concerns the philosophical issue of epistemology, or the study of knowledge. Even if we agree that the psychological assumption is false, AI researchers could still argue (as AI founder John McCarthy has) that it was possible for a symbol processing machine to represent all knowledge, regardless of whether human beings represented knowledge the same way. Dreyfus argued that there was no justification for this assumption, since so much of human knowledge was not symbolic.

The ontological assumption
The world consists of independent facts that can be represented by independent symbols.

Dreyfus also identified a subtler assumption about the world. AI researchers (and futurists and science fiction writers) often assume that there is no limit to formal, scientific knowledge, because they assume that any phenomenon in the universe can be described by symbols or scientific theories. This assumes that everything that exists can be understood as objects, properties of objects, classes of objects, relations of objects, and so on: precisely those things that can be described by logic, language and mathematics. The question of what exists is called ontology, and so Dreyfus calls this "the ontological assumption". If it is false, it raises doubts about what we can ultimately know and about what intelligent machines will ultimately be able to help us do.
For what it's worth, and coming as no surprise to readers of this blog, I tend to agree with Dreyfus's assessment of the four false assumptions.

On the other side of the debate is David Deutsch, a pioneer of quantum computation who formulated the description of a quantum Turing machine (he also specified an algorithm, now known as Deutsch's algorithm, designed to run on a quantum computer [2]).
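
For the curious, here is a minimal Python/NumPy sketch of Deutsch's algorithm (an illustration added here, not something presented in the podcast): with a single query to an oracle for a one-bit function f, the circuit decides whether f is constant or balanced, something a classical computer needs two queries to do.

import numpy as np

# A minimal NumPy sketch of Deutsch's algorithm (added here, not from the
# podcast). One oracle query decides whether f: {0,1} -> {0,1} is constant
# or balanced.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

def oracle(f):
    """The unitary U_f |x>|y> = |x>|y XOR f(x)> as a 4x4 permutation matrix."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1, 0], [0, 1])            # start in |0>|1>
    state = np.kron(H, H) @ state              # Hadamard on both qubits
    state = oracle(f) @ state                  # the single oracle query
    state = np.kron(H, I) @ state              # Hadamard on the query qubit
    p0 = state[0] ** 2 + state[1] ** 2         # probability the first qubit reads 0
    return "constant" if p0 > 0.5 else "balanced"

print(deutsch(lambda x: 0))      # constant function  -> "constant"
print(deutsch(lambda x: x))      # balanced function  -> "balanced"
print(deutsch(lambda x: 1 - x))  # balanced function  -> "balanced"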

Deutsch is also the articulator of a TOE (theory of everything), one that sounds intriguing (although it needs a couple of tweaks). Via Wikipedia:
In his 1997 book The Fabric of Reality: Towards a Theory of Everything, Deutsch details his "Theory of Everything." It aims not at the reduction of everything to particle physics, but rather mutual support among multiversal, computational, epistemological, and evolutionary principles. His theory of everything is (weakly) emergentist rather than reductive.

There are "four strands" to his theory:
  1. Hugh Everett's many-worlds interpretation of quantum physics, "the first and most important of the four strands."
  2. Karl Popper's epistemology, especially its anti-inductivism, its requirement of a realist (non-instrumental) interpretation of scientific theories, and its emphasis on taking seriously those bold conjectures that resist falsification.
  3. Alan Turing's theory of computation, especially as developed in Deutsch's Turing principle, in which the Universal Turing machine is replaced by Deutsch's universal quantum computer. ("The theory of computation is now the quantum theory of computation.")
  4. Richard Dawkins's refinement of Darwinian evolutionary theory and the modern evolutionary synthesis, especially the ideas of replicator and meme as they integrate with Popperian problem-solving (the epistemological strand).
With that background, on to the podcast.

AI: think again

Sunday 30 June 2013

Image: What will it take to create true artificial intelligence? The philosophers could hold the key. (Images Etc Ltd/Getty)

After decades of research, the thinking computer remains a distant dream. Computers can play chess, drive cars, and communicate with each other, but thinking like humans remains a step beyond. Now, the inventor of quantum computation, David Deutsch, has called for a wholesale change in thinking to break the impasse. Enter the philosophers.


Guests

Professor Hubert Dreyfus, University of California, Berkeley
David Deutsch, Centre for Quantum Computation, University of Oxford

Presenter: Joe Gelonesi
Producer: Diane Dean

Sunday, April 21, 2013

Living in a Quantum Game - Pablo Arrighi and Jonathan Grattage

From COSMOS Magazine, an interesting article on the development of quantum computing and its influence on current trends in physics, namely the suggestion by some that physics should shift away from the study of matter toward the study of information.

Living in a quantum game

By Pablo Arrighi and Jonathan Grattage
COSMOS Magazine | 20 March 2013

For scientists in the field of quantum information, the swirling chaos of space and the delicate intricacies of life are nothing more than a game.


SUPPOSE YOU ARE at a dinner party in a fancy French restaurant. During a lull in the conversation, the person on your right – who is a friend of a friend – turns to you and asks: “what do you do for a living?”

Now, suppose you belong to the first generation of scientists who have studied for PhDs in quantum information. This new field combines aspects of computer science, maths and physics, and you find it absolutely fascinating. But launching into an explanation of how all these things come together is not exactly fodder for charming dinner party repartee. The last time you answered that question honestly, the other guests politely endured a five-minute lecture. You can do better this time. So you offer a short, to-the-point answer: “I work in theoretical physics.”

“Really! But what do you do, exactly?” is the reply. Experience has taught you that the most effective answer involves travelling to conferences in exotic locations. But on this occasion, your subconscious rebels. You find that your brain is filling with concepts such as quantum cellular automata and models of computation, concepts at the core of your work. They are what get you out of bed in the morning. So you blurt out something like “models of quantum computation and the consequences for theoretical physics”. From the look on your companion’s face, you know you’ve messed up again.

QUANTUM COMPUTATION is a new, booming field that exploits the magic of quantum theory to the benefit of computing – from faster computer speeds to more secure data transfer. Discussing its importance for theoretical physics at large may not be a subject that fits neatly into a casual dinner-party conversation, but it is an idea that is increasingly having its day. Some physicists argue that physics should shift away from the study of matter, particles and forces, and instead focus on information. The concept of information is already central to physics, via notions such as entropy (fundamental to thermodynamics, the study of how energy moves in and out of a system), observers and measurements (central to Einstein’s theories of relativity and to quantum mechanics), and information exchange between systems.

But there is a growing opinion that, as these ideas are embraced, physics will become not only informational, but computational. This idea dates back to the 1970s, and states that the entire universe can be considered to be a giant computer. In this universe-computer, in all places and at all times, particles are treated as patterns of information moving across a vast grid of microprocessors, rather than material bodies colliding and scattering – much as a tennis ball can be thought of as a pattern of pixels moving across your TV screen, rather than a lump of rubber ricocheting off a grassy surface during the Wimbledon final. Digital physicists, for their part, are like characters in a video game who are desperately trying to understand the rules.

A striking result to come out of this 1970s work was Robin Gandy’s argument that the universe could be simulated by a computer with unlimited memory. Gandy was a British mathematician, logician and student of the brilliant Alan Turing, whose work on coding laid the foundation for the invention of the modern computer.

Gandy began his argument by noting that all physicists agreed on a few self-evident principles. One is that the laws of physics remain the same everywhere and at all times: if they didn’t, they wouldn’t deserve to be called laws. The second is that there are causes and there are effects; all events must have their causes in the past, and the causal influence can travel at most at the speed of light. That means information, too, can travel from one system to another no faster than the speed of light.

Finally, and somewhat controversially, Gandy stated that it is reasonable for physicists to believe any finite volume of space can only contain a finite amount of information.

YOU MIGHT BE beginning to see why dinner conversations fail to flourish with the gentle banter of theoretical physicists. But, stay with us here, for now we are getting close to the crux of the matter.

From Gandy’s third principle of finite-density information, it follows that if space were, hypothetically, divided into cubes, each cube could be described by the finite information it contained.

Moreover, Gandy’s second principle says that the state of each cube at a particular point in time, call it t+1, is determined by the state of the neighbouring cubes at time t. In other words, the state of a cube at time t+1 is obtained by applying what information theorists call a ‘local rule’, to the state of its neighbouring cubes at time t. Finally, it follows from Gandy’s first principle that this local rule is the same everywhere and at all times. So, the state of the entire universe at time t+1 can be computed by applying some fixed local rule everywhere in space.

The effect of this argument is to reduce the universe to a type of computer called a cellular automaton. You may have played with a simple cellular automaton before, in the form of the popular computer-based ‘Game of Life’, developed by British mathematician John Conway.

The Game of Life comprises a 2-D grid of cells in which each cell can be either ‘alive’ or ‘dead’. Once you have decided which cells will be alive initially, the state of any given cell at a later time will be determined by that cell’s previous state plus the states of its eight immediate neighbours, according to rules that simulate the effects of underpopulation, overcrowding and reproduction (see ‘Conway’s Game of Life’).

These rules are simple, yet it has been shown that the game is universal, meaning that it can be made to compute any known classical algorithm or set of instructions – in much the same way that simple logic gates and wires of a standard desktop PC do.
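
To make the simplicity of those rules concrete, here is a short Python/NumPy sketch of a single Game of Life update step (added here; the article itself contains no code), with a glider moving across a small wrapping grid.

import numpy as np

# A short sketch (added here, not part of the original article) of one update
# step of Conway's Game of Life: each cell's next state depends only on its
# own state and the states of its eight neighbours.

def life_step(grid):
    """Return the next generation of a 2-D 0/1 grid (edges wrap around)."""
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth: dead cell with exactly three live neighbours.
    # Survival: live cell with two or three live neighbours.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

grid = np.zeros((6, 6), dtype=int)              # a "glider" on a small grid
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
print(grid)                                      # the glider has drifted one cell diagonally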

But is there any chance that the real universe we see and experience could be reduced to such a simple game?

THE PROBLEM WITH Gandy’s model – and the reason why the original digital physics project was doomed to failure – boils down to one thing: quantum physics. To understand why, let us return to our dinner party at the French restaurant, where the food is getting cold.

Against your better judgement, you launch into an explanation of quantum theory using the knives and forks on the table. You hear yourself saying: “Pick a system that can be one of two things – such as an item of cutlery, which can be either a knife or a fork.” You hold up your knife. “Well, in quantum theory, this piece of cutlery does not have to be one or the other. It can be both at the same time, a state known as superposition.” As you start to explain, you sense that your audience may not be grasping the full implications. You ponder the wisdom of an alternative explanation involving salt and pepper shakers, but before you can begin, your waiter arrives with the dessert menu.

The reason we don’t encounter superpositions of knives and forks on a daily basis is that as soon as you observe a quantum system, or take a measurement, it becomes ‘classical’ again. In classical physics, when you observe something you can have only a limited set of results: up or down, knife or fork, alive or dead. Likewise, in classical computing, the smallest unit of information, known as a bit, can take on one of two values, 0 or 1. Quantum physics throws the rulebook out and allows a superposition of states, a concept described by the famous Schrödinger’s cat thought experiment (see ‘The cat paradox’).

So, our smallest unit of quantum information, called a qubit (quantum bit), can only store a single bit of classical information, a 0 or a 1. In that sense, Gandy’s principle of finite information density remains compatible with quantum theory: we cannot effectively store more than a bit of information within a qubit. However, quantum physics says that before one observes a qubit, it is allowed to be in a superposition of states. Hence, quantum physics no longer allows for the case where each cube of space can be fully described by the finite information stored in it, and this is where Gandy’s argument falls down.
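
To make the qubit picture concrete, here is a short Python/NumPy sketch (added here, not part of the original article): a qubit is just a normalised pair of complex amplitudes, a Hadamard gate puts it into an equal superposition of 0 and 1, and measuring it yields 0 or 1 with the Born-rule probabilities.

import numpy as np

# A sketch added here (not from the article): a qubit as a pair of complex
# amplitudes, put into superposition by a Hadamard gate, then measured with
# probabilities given by the Born rule, |amplitude|^2.

rng = np.random.default_rng(seed=0)

ket0 = np.array([1, 0], dtype=complex)          # the classical state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

qubit = H @ ket0                                 # now a superposition of 0 and 1

def measure(state, shots=1000):
    probs = np.abs(state) ** 2                   # Born rule
    return rng.choice([0, 1], size=shots, p=probs)

outcomes = measure(qubit)
print("fraction of 1s:", outcomes.mean())        # roughly 0.5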

The idea of modelling the universe as a computer was resurrected in some form in the early 1980s by American theoretical physicist Richard Feynman. His idea was born out of frustration at seeing classical computers take weeks to simulate quantum physics experiments that happen faster than a blink of an eye. Intuitively, he felt that the job of simulating quantum systems could be done better by a computer that was itself a quantum system.

Like their classical counterparts, quantum computers consist of circuits. To construct quantum circuitry you need quantum wires, which are analogues of real wires carrying conventional bits (as voltages), except that they carry qubits. Classical computers are made of logic gates, which take in information (bits) and output new information (new bits). Quantum gates process qubits instead.
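
Continuing in the same vein (again a sketch added here, not from the article), a two-qubit "circuit" is just a product of gate matrices: a Hadamard on the first wire followed by a CNOT turns |00> into the entangled Bell state (|00> + |11>)/sqrt(2).

import numpy as np

# A sketch added here (not from the article): a two-qubit circuit as a product
# of gate matrices. A Hadamard on the first wire followed by a CNOT entangles
# the two qubits.

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # flip the second qubit if the first is 1

state = np.zeros(4)
state[0] = 1                                     # |00>
state = np.kron(H, I) @ state                    # Hadamard on the first qubit
state = CNOT @ state                             # entangling gate
print(state)                                     # [0.707..., 0, 0, 0.707...] = Bell state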

OVER THE PAST decade or so, experimentalists in many groups around the world have successfully implemented quantum wires and simple qubit gates. The true difficulties lie with the precision of more complex qubit gates and with protecting many wires from the environment – remember, if the environment ‘observes’ the quantum wires, they become classical again.

One of us (Pablo Arrighi), along with colleagues, developed a version of Gandy’s hypothesis that accounts for the complexities of quantum mechanics. Instead of the third principle, which says that a finite volume of space contains a finite amount of information, we state that a finite volume of space can only hold a finite number of qubits.

Considering the implications of the three updated principles, we were again able to reduce the universe to a computer, a quantum version of the cellular automaton discussed earlier. A quantum cellular automaton is very much like a classical cellular automaton, except that now the cells of the grid contain qubits (see ‘How to explain the universe’). The evolution from time t to t + 1 involves applying a quantum gate operation to neighbourhoods of cells repeatedly, across space. But, there are, alas, some subtleties to quantum cellular automata that cannot be explained quite so easily in a picture. For example, the cells can now be in a superposition of states, and they can also be ‘spookily’ entangled with any other cell – the state of a cell doesn’t solely depend on those next to it.
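
For readers who want something more concrete than the picture, here is a toy one-dimensional partitioned quantum cellular automaton in Python/NumPy (a sketch added here under simplifying assumptions, not the authors' construction): the global state is a vector over all cells, and one time step applies the same two-cell unitary to every adjacent pair of cells in two staggered passes.

import numpy as np

# A toy 1-D partitioned quantum cellular automaton (added here under
# simplifying assumptions, not the authors' construction). One time step
# applies the same two-cell unitary U to pairs (0,1), (2,3), ... and then
# to pairs (1,2), (3,4), ...

n = 6                                            # number of cells (qubits)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1]).astype(complex)
U = CZ @ np.kron(H, H)                           # an arbitrary two-cell unitary, for illustration only

def gate_on_pair(U, i, n):
    """Embed the two-cell unitary U so it acts on cells (i, i+1) of an n-cell chain."""
    op = np.eye(1)
    for j in range(n):
        if j == i:
            op = np.kron(op, U)
        elif j == i + 1:
            continue                             # this cell is already covered by U
        else:
            op = np.kron(op, np.eye(2))
    return op

def qca_step(state, U, n):
    """One time step: update even pairs, then odd pairs, with the same local rule."""
    for i in range(0, n - 1, 2):
        state = gate_on_pair(U, i, n) @ state
    for i in range(1, n - 1, 2):
        state = gate_on_pair(U, i, n) @ state
    return state

state = np.zeros(2 ** n, dtype=complex)
state[0] = 1                                     # every cell starts in |0>
for _ in range(3):
    state = qca_step(state, U, n)
print("norm after 3 steps:", np.linalg.norm(state))   # stays 1: the evolution is unitary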

There is, of course, a big gap between constructing a ‘toy-model’ quantum cellular automaton and applying the lessons learned from it to the real universe. But if the updated versions of Gandy’s hypotheses hold true, and we can indeed describe the universe as a gigantic quantum cellular automaton, then studying physics becomes a game of attempting to deduce the ‘program’ of the vast quantum computer that we live in.

The conventional approach to deducing the universe’s ‘program’ is, of course, not to use cellular automata or anything like them, but to probe the ‘rules of the game’ with increasingly refined physics experiments, such as those performed using the Large Hadron Collider at the CERN particle physics lab. But perhaps there is an alternative computer science-orientated method, one that attempts to find the rules deductively.

SO, WE COME BACK to what quantum information theorists in physics really do for a living. We can begin this deductive process by discarding rules that are too simple, on the grounds that we live in a complex universe.

Next, we note that all sufficiently complex rules can be made to simulate each other. In other words, if the rule of a particular quantum cellular automaton is complex enough, then it can simulate all other quantum cellular automata, even when the other automata have rules that are horrendously complicated. Analogous to the universal nature of Conway’s Game of Life, a quantum cellular automaton that can perform such a simulation is said to have intrinsic universality. If we can find the simplest, intrinsically universal rule for a quantum cellular automaton, we can use it to find the simplest and most ‘natural’ (closest to what we observe in nature) way of implementing or simulating physical phenomena – like the particles and forces that make up the universe.

Of course, it remains to be seen whether all physical phenomena can be ‘encoded’ using the concepts developed here. Many difficulties lie ahead for those of us who are trying to answer the question of how nature computes itself.

The concepts of quantum cellular automata and intrinsic universality are likely to prove key in finding simple, minimal and universal ‘toy models’ to work with in attempting to answer this question. From a computer-science point of view, reaching this goal will amount to a better understanding of physics.

Yet we are obliged to conclude with a word of caution: these ideas may not be all that helpful in a restaurant conversation. Attempting to explain them may end with the other diners deciding that you are the best person to call the next time their (classical) computer breaks down. But on a more positive note, if we can find the rules, everyone will be a winner in this game of life.


~ Pablo Arrighi and Jonathan Grattage are quantum-information scientists affiliated with the University of Grenoble and ENS de Lyon, France.

Thursday, February 28, 2013

The Richard Feynman Trilogy: The Physicist Captured in Three Films


From Open Culture, here are three films (and a television series) about the brilliant, charmingly eccentric, Nobel Prize winning physicist Richard Feynman. Here is some background on the man from Wikipedia:
Richard Phillips Feynman (May 11, 1918 – February 15, 1988) was an American theoretical physicist known for his work in the path integral formulation of quantum mechanics, the theory of quantum electrodynamics, and the physics of the superfluidity of supercooled liquid helium, as well as in particle physics (he proposed the parton model). For his contributions to the development of quantum electrodynamics, Feynman, jointly with Julian Schwinger and Sin-Itiro Tomonaga, received the Nobel Prize in Physics in 1965. He developed a widely used pictorial representation scheme for the mathematical expressions governing the behavior of subatomic particles, which later became known as Feynman diagrams. During his lifetime, Feynman became one of the best-known scientists in the world. In a 1999 poll of 130 leading physicists worldwide by the British journal Physics World he was ranked as one of the ten greatest physicists of all time.[3]

He assisted in the development of the atomic bomb and was a member of the panel that investigated the Space Shuttle Challenger disaster. In addition to his work in theoretical physics, Feynman has been credited with pioneering the field of quantum computing,[4][5] and introducing the concept of nanotechnology.[6] He held the Richard Chace Tolman professorship in theoretical physics at the California Institute of Technology.

Feynman was a keen popularizer of physics through both books and lectures, notably a 1959 talk on top-down nanotechnology called There's Plenty of Room at the Bottom, and the three-volume publication of his undergraduate lectures, The Feynman Lectures on Physics. Feynman also became known through his semi-autobiographical books Surely You're Joking, Mr. Feynman! and What Do You Care What Other People Think?, and books written about him, such as Tuva or Bust!.
Enjoy the videos.

The Richard Feynman Trilogy: The Physicist Captured in Three Films


January 6th, 2012



It’s another case of the whole being greater than the sum of its parts. Between 1981 and 1993, documentary producer Christopher Sykes shot three films and one TV series dedicated to the charismatic, Nobel Prize-winning physicist Richard Feynman (1918-1988). We have presented these documentaries here individually before (some several years ago), but never brought them together. So, prompted by a post on Metafilter, we’re doing just that today.

We start above with The Pleasure of Finding Things Out, a film directed by Sykes in 1981. It features Feynman talking in a very personal way about the joys of scientific discovery, and about how he developed his enthusiasm for science. About the program, Harry Kroto (winner of the Nobel Prize for Chemistry) apparently once said: “The 1981 Feynman [production] is the best science program I have ever seen. This is not just my opinion – it is also the opinion of many of the best scientists that I know who have seen the program. It should be mandatory viewing for all students whether they be science or arts students.”



The Pleasure of Finding Things Out was followed by Fun to Imagine, a Sykes-directed television series that got underway in 1983. Feynman hosted the series and, along the way, used physics to explain how the everyday world works – “why rubber bands are stretchy, why tennis balls can’t bounce forever, and what you’re really seeing when you look in the mirror.” 12 episodes (including the first episode shown above) await you on YouTube. Thanks to Metafilter, you can access them easily right here: 1) Jiggling Atoms, 2) Fire, 3) Rubber Bands, 4) Magnets (and ‘Why?’ questions), 5) Bigger is Electricity!, 6) The Mirror, 7) The Train, 8) Seeing Things, 9) Big Numbers and Stuff (i), 10) Big Numbers and Stuff (ii), 11) Ways of Thinking (i) and 12) Ways of Thinking (ii).



Let’s skip forward to 1989, when PBS’ NOVA aired The Last Journey of a Genius, a television film that documented Feynman’s final days and his longtime obsession with traveling to Tannu Tuva, a state outside of outer Mongolia. For the better part of a decade, Feynman and his friend Ralph Leighton schemed to make their way to Tannu Tuva, but Cold War politics frustrated their efforts. Sykes’ documentary runs roughly 50 minutes and features an ailing Feynman talking about his wanderlust. He died two weeks later, never having made the trip.



Five years after Feynman’s death, Sykes directed the final documentary in his trilogy, No Ordinary Genius. This film traces the professor’s adventures inside and outside of science, using stories and photographs provided by Feynman’s family and close friends. The documentary originally aired on the BBC in 1993, and it appears in our collection of 450 Free Movies Online. Also don’t miss the introductory physics lectures that Feynman presented at Cornell in 1964. You will find them listed in our big collection of 400 Free Courses Online. Just scroll down to the Physics section and enjoy.