Thursday, November 27, 2008

Nanocloth is Never Wet

If you were to soak even your best raincoat in water for two months, it would be wet through by the end. But a new waterproof material developed by Swiss chemists would be as dry as the day it went in.
Lead researcher Stefan Seeger at the University of Zurich says the fabric, made from polyester fibres coated with millions of tiny silicone filaments, is the most water-repellent clothing-appropriate material ever created.
Drops of water stay as spherical balls on top of the fabric, and a sheet of the material need only be tilted by 2 degrees from horizontal for them to roll off like marbles. A jet of water bounces off the fabric without leaving a trace.
Protective spikes
The secret to this incredible water resistance is the layer of silicone nanofilaments, which are strongly hydrophobic thanks to their surface chemistry. The spiky structure of the 40-nanometre-wide filaments strengthens that effect, creating a coating that prevents water droplets from soaking through to the polyester fibres underneath.
"The combination of the hydrophobic surface chemistry and the nanostructure of the coating results in the super-hydrophobic effect," Seeger explained to New Scientist. "The water comes to rest on the top of the nanofilaments like a fakir sitting on a bed of nails," he says.
A similar combination of water-repelling substances and tiny nanostructures is responsible for many natural examples of extreme water resistance, such as the surface of lotus leaves.
The silicone nanofilaments also trap a permanent layer of air between them. Similar layers - known as plastrons - are used by some insects and spiders to breathe underwater.
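For readers who want to put numbers on the "fakir" effect: the standard way to estimate the apparent contact angle on such a composite surface of solid tips and trapped air is the Cassie-Baxter relation. The sketch below uses illustrative assumptions (a contact angle of about 105 degrees for smooth silicone and a 5% solid contact fraction), not figures from the paper:

```python
import math

def cassie_baxter_angle(theta_young_deg, solid_fraction):
    """Apparent contact angle on a composite (solid + trapped air) surface.

    cos(theta_CB) = f * (cos(theta_Y) + 1) - 1,
    where f is the fraction of the droplet base resting on solid.
    """
    cos_cb = solid_fraction * (math.cos(math.radians(theta_young_deg)) + 1) - 1
    return math.degrees(math.acos(cos_cb))

# Assumed values: ~105 degrees for smooth silicone, and a 5% solid
# contact fraction for the spiky nanofilament coating.
print(cassie_baxter_angle(105, 0.05))  # ~164 degrees: strongly water-repellent
```

Anything above roughly 150 degrees is conventionally called super-hydrophobic, which is why droplets sit as near-perfect spheres and roll off at tiny tilt angles.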
https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi18xd0IE55eQhQ4BNiY6Ma5bH6qAcJV3RBeANI8lyT0L_I5ehbRgAWM4wMakruBkpMw_lCFpSqmQmFngk-idmhj-g0mc5eqquXXjdatCSy-axQ3nsmfiuxPPRMOn9Q7B6dYnVG3SCzTxMX/s400/XXXu0A2BZ_400x300.jpg
Self-cleaning suit
This fine layer of air ensures that water never comes into contact with the polyester fabric. It can be submerged in water for two months and still remain dry to the touch, says Seeger.
The plastron layer can also reduce drag when the fabric moves through water by up to 20%, according to preliminary experiments conducted by Seeger. "This could be very interesting for athletic swimwear applications," he suggests, raising the possibility of future swimsuits that never get wet.
The new coating is produced in a one-step process, in which silicone in gas form condenses onto the fibres to form nanofilaments. The coating can also be added to other textiles, including wool, viscose and cotton, although polyester currently gives the best results.
Durable invention
Experiments also showed that the new coating is durable. Unlike some water-resistant coatings, it remains more-or-less intact when the fabric is rubbed vigorously, although it didn't survive an everyday washing machine cycle.
For Steven Bell, director of the Innovative Molecular Materials Group at Queen's University Belfast, it is this durability that represents the really exciting aspect of this work.
"Although the textiles did show some degradation in the mechanical abrasion tests, their performance was very impressive," he says. "The era of self-cleaning clothes may be closer than we think."
Journal reference: Advanced Functional Materials (DOI: 10.1002/adfm.200800755)

When did the Earth turn green?

Photosynthesis - the process by which organisms like plants convert light energy into chemical energy - may not have been around quite as long as previously thought.
That's the conclusion of a study of solidified oil that formed around 2.7 billion years ago in the Pilbara region of Western Australia.
The study, led by Birger Rasmussen of the Curtin University of Technology in Perth, suggests the planet had to wait another 550 million years for photosynthesis to get going, and that the oldest known eukaryotic (complex) cells are one billion years younger than previously thought.
"A lot of people will revisit their understanding of the late Archaean period in light of these results," says Woodward Fischer of the California Institute of Technology, Pasadena.
Oxygen surge
Photosynthesis converts carbon dioxide and water into carbohydrates and oxygen. The process is the most likely explanation for "the great oxidation" event 2.4 billion years ago, when oxygen in the atmosphere started to build up, paving the way for the evolution of complex life-forms like animals.
The oxygen surge is also considered to be the strongest clue to the timing of the evolution of photosynthesis. But until now it has conflicted with the fossil evidence.
Almost a decade ago, Jochen Brocks, then at the University of Sydney, found minute traces of organic molecules that could only have come from photosynthetic cyanobacteria in Pilbara shale (Science, vol 285, p 1033). Other organic molecules were indicative of eukaryotic cells.
At the time, Brocks' analysis dated the so-called "molecular fossils" at 2.7 billion years ago.
Not so old
But the new evidence from the Rasmussen team suggests that what Brocks had found was actually molecular contaminants from a more recent era. Brocks is a co-author of the Rasmussen paper.
"The existing unambiguous fossil evidence for the timing of photosynthesis now moves to 2.15 billion years ago," says Rasmussen, referring to fossilised cyanobacteria that have been found in Canada's Belcher Islands.
Rasmussen, Brocks and their colleagues used a relatively new device called a NanoSIMS ion probe to measure the ratios of carbon isotopes in solidified oil - the proposed source of Brocks' organic compounds - in the bits of rock left over from the original study.
"The oil had to have formed in the rock, but its isotopic signature was completely different to that of the microbial fossils, so we concluded that the microbial fossils were more recent contaminants," says Rasmussen.
Not done yet
Other paradoxes remain to be solved, however. Since Brocks' discovery a decade ago, "molecular fossils" of photosynthesis from before 2.4 billion years ago have turned up at other sites.
"We suspect that those studies will turn out to be flawed, too," says Rasmussen.
Journal reference: Nature, DOI: 10.1038/nature07381

Sunday, November 23, 2008

Opera’s new browser

Announced Tuesday at Comdex Fall 2001, the test, or “beta,” version of Opera 6.0 for Microsoft’s Windows operating system brings Opera up to speed with heavyweight competitors Microsoft and Netscape by allowing people to read Web pages written in non-Roman alphabets, including Chinese and Japanese.

“What we’re seeing is that the international market is getting bigger and bigger,” said Jon S. von Tetzchner, chief executive of the Oslo, Norway-based company. “To an extent, English was the ruling language on the Internet for a very long time, but it’s less so now. What we definitely will be seeing are more and more users from China, from all the Asian countries, and this applies to Eastern Europe as well. We see this as a possibility to get into those markets.”

The new browser version comes as Opera has enjoyed a burst of publicity courtesy of rival Microsoft, which launched a new version of its MSN Web portal last month that briefly locked out non-Microsoft browsers. Although Microsoft’s own Internet Explorer easily accessed MSN pages, other browsers - such as Opera, Mozilla, Amaya and some versions of Netscape - received error messages recommending that people “upgrade” to Internet Explorer.

Microsoft has since moved to fix the error but not before the gaffe threw a media spotlight on rival browsers.

Industry analysts downplayed the significance of the 6.0 beta release, noting that the company’s bigger ambitions lay in providing browsers to smaller devices than the PC.

Although Netscape’s small browser efforts have stumbled with repeated delays, and Microsoft’s have met with resistance from operating system competitors, Opera has been moving aggressively to establish itself as the browser vendor of choice for small devices.

This summer, the U.K.-based mobile software unit of Psion selected Opera as the browser for its handsets. That agreement came shortly after Opera took the wraps off its deal to supply IBM with small browsers. Before that, Opera released a browser for Symbian’s EPOC operating system for next-generation cell phones and other mobile Internet access devices.

“With the PC browsers, Opera is more there to establish a name in the industry. It’s not going to be an important revenue source in the future,” said Jon Mosberg, equity analyst at the Oslo branch of Stockholm, Sweden-based Enskilda Securities. “The greatest potential is in the mobile Internet…Symbian doesn’t want Microsoft to be the supplier of their browser because it could dictate the terms of using the software. That opens up an opportunity for a company that has a browser that runs well on the new devices.”

At the technical heart of Opera’s internationalization effort is its adoption of the Unicode Worldwide Character Set, a widely supported standard for expressing letters and other characters on computers. Opera’s support for Unicode came late because of the challenge of integrating it with Windows 95, Tetzchner said.
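As a rough illustration of what Unicode buys a browser: every character, Roman or not, gets a single numeric code point, and an encoding such as UTF-8 turns those code points into bytes. A small sketch, purely illustrative and unrelated to Opera's own code:

```python
# Each character maps to one Unicode code point; UTF-8 encodes that code
# point as one or more bytes. Illustrative only.
for ch in "Ωé中こ":
    print(ch, hex(ord(ch)), ch.encode("utf-8"))
# Without Unicode, a browser is limited to one legacy code page at a time,
# which is why non-Roman pages used to render as garbage.
```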

In other words
Opera is still hammering out its support for Arabic. Coming “as soon as possible” are browser interfaces written in non-English languages.

Also lagging behind Opera’s new browser for Windows are its counterparts for the Linux and Macintosh operating systems. Opera has yet to finalize its version 5.0 browser for the Mac. But the company promised a 6.0 beta for Linux “fairly quickly.”

Tetzchner said the Opera 6.0 beta was faster, used memory more efficiently, and had incremental improvements in its support for standards promulgated by the World Wide Web Consortium (W3C).

The company has followed Netscape and Microsoft with new options for displaying windows, including a choice between single and multiple document interfaces, and a persistent bar for bookmarks and search. With the 6.0 beta, people can run multiple copies of Opera simultaneously, preserving different sets of e-mail, bookmarks and other preferences.

E-mail changes include the ability to import e-mail from Microsoft Outlook accounts and support for TLS (Transport Layer Security) for POP and SMTP accounts. TLS is a security protocol under development by the Internet Engineering Task Force.
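To make the TLS-for-mail idea concrete, here is a minimal sketch of the two protected sessions using Python's standard library. The host names and credentials are placeholders, and this is of course not Opera's implementation:

```python
# A minimal sketch of mail retrieval (POP3 over TLS) and sending (SMTP
# upgraded to TLS via STARTTLS). Hosts and credentials are placeholders.
import poplib, smtplib

pop = poplib.POP3_SSL("pop.example.com", 995)   # POP3 wrapped in TLS
pop.user("alice")
pop.pass_("secret")
print(pop.stat())  # (message count, mailbox size in bytes)
pop.quit()

smtp = smtplib.SMTP("smtp.example.com", 587)
smtp.starttls()                                  # upgrade the session to TLS
smtp.login("alice", "secret")
smtp.sendmail("alice@example.com", ["bob@example.com"],
              "Subject: hello\r\n\r\nSent over an encrypted channel.")
smtp.quit()
```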

The browser also comes with a new default user interface.

In a feature reminiscent of information-gathering applications such as Atomica, Opera 6.0 offers Hotclick, which lets people select a word and pull down a definition or translation without leaving the page. To provide encyclopedia and translation content, Opera has formed a partnership with Terra Lycos. Other Hotclick partners include search engine Google and e-commerce Web site Amazon.com.

Although Opera has enjoyed a loyal following among the Web cognoscenti, and particularly those with animosity toward Microsoft, the Norwegian browser has lagged far behind in distribution, partly because it persisted in charging for the browser long after Microsoft and Netscape opted to give theirs away.

Nearly a year ago, Opera began offering a free version of the browser that comes with advertising. Ad-free Opera costs $39.

That experiment has proved successful, Tetzchner said.

“Revenues have increased, so it’s working,” he said. “People see that it’s free, so more people use it. Then a number of those people want to get rid of the ads, or they just want to support us to make sure we are around.”

Paying users of Opera 5.x for Windows get a free upgrade to the 6.0 beta. Paid users of Opera 4.x get a discount of about half off the $39 fee.

Opera is planning to release the final 6.0 Windows browser by Christmas.

Nanotechnology

A robot as small as a housefly! How is that possible?

Well, nanotechnology makes it possible.

I hope the following information gives you an idea of the latest advances in shrinking product sizes.

More on these types of technologies... Click this.

Nanocoatings Boost Industrial Energy Efficiency

Friction is the bane of any machine. When moving parts are subject to friction, it takes more energy to move them, the machine doesn’t operate as efficiently, and the parts have a tendency to wear out over time.

A photograph of the process of coating a substrate (left) with AlMgB14 by pulsed laser deposition. The bright plume in the center of the photograph is an AlMgB14 plasma. The solid target is just to the right of the plume. (Credit: Image courtesy of DOE/Ames Laboratory)

But if you could manufacture parts that had tough, “slippery” surfaces, there’d be less friction, requiring less input energy and the parts would last longer. Researchers at the U.S. Department of Energy’s Ames Laboratory are collaborating with other research labs, universities, and industrial partners to develop just such a coating.

“If you consider a pump, like a water pump or a hydraulic pump, it has a turbine that moves the fluid,” said Bruce Cook, an Ames Laboratory scientist and co-principal investigator on the four-year, $3 million project. “When the rotor spins, there’s friction generated at the contacting surface between the vanes and the housing, or stator. This friction translates into additional torque needed to operate the pump, particularly at start-up. In addition, the friction results in a degradation of the surfaces, which reduces efficiency and the life of the pump. It takes extra energy to get the pump started, and you can’t run it at its optimum (higher speed) efficiency because it would wear out more quickly.”

(Right) A photograph of an AlMgB14 coating on a steel substrate. The substrate is the mottled structure on the left-hand side of the photo and the coating is the thin, darker strip running along the edge of the steel. (The blemishes on the steel are carbide inclusions.) The coating has a thickness of approximately 2 to 3 microns (about one ten-thousandth of an inch).

The coating Cook is investigating is a boron-aluminum-magnesium ceramic alloy he discovered with fellow Ames Laboratory researcher and Iowa State University professor of Materials Science and Engineering Alan Russell about eight years ago. Nicknamed BAM, the material exhibited exceptional hardness, and the research has expanded to include titanium-diboride alloys as well.

In many applications it is far more cost effective to apply the wear-resistant materials as a coating than to manufacture an entire part out of the ceramic. Fortunately, the BAM material is amenable to application as a hard, wear-resistant coating. Working with ISU materials scientist Alan Constant, the team is using a technique called pulsed laser deposition to deposit a thin layer of the alloy on hydraulic pump vanes and tungsten carbide cutting tools. Cook is working with Eaton Corporation, a leading manufacturer of fluid power equipment, using another, more commercial-scale technique known as magnetron sputtering to lay down a wear-resistant coating.

Pumps aren’t the only applications for the boride nanocoatings. The group is also working with Greenleaf Corporation, a leading industrial cutting tool maker, to put a longer lasting coating on cutting tools. If a tool cuts with reduced friction, less applied force is needed, which directly translates to a reduction in the energy required for the machining operation.
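A back-of-envelope sketch of that energy argument: the work lost to sliding friction scales as the friction coefficient times the normal force times the sliding distance, so a tenfold drop in friction cuts that loss tenfold. All numbers below are assumptions for illustration, not measurements from the project:

```python
# Friction work W = mu * N * d. A tenfold drop in mu means a tenfold drop
# in energy lost to friction. Forces and distances here are assumed values.
def friction_work(mu, normal_force_N, sliding_distance_m):
    return mu * normal_force_N * sliding_distance_m  # joules

uncoated = friction_work(0.5, 200.0, 1000.0)   # assumed uncoated tool
coated   = friction_work(0.05, 200.0, 1000.0)  # order-of-magnitude lower mu
print(uncoated, coated, f"{(1 - coated/uncoated):.0%} less energy lost")
```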

To test the coatings, the project team includes Peter J. Blau and Jun Qu at one of the nation’s leading friction and wear research facilities at DOE’s Oak Ridge National Laboratory, or ORNL, in Tennessee. Initial tests show a decrease in friction relative to an uncoated surface of at least an order of magnitude with the AlMgB14-based coating. In preliminary tests, the coating also appears to outperform other coatings such as diamond-like carbon and TiB2.

In a separate, but somewhat related project, Cook is working with researchers from ORNL, Missouri University of Science and Technology, the University of Alberta, and private companies to develop coatings for high-pressure water jet cutting tools and severe service valves, where parts are subject to abrasives and other extreme conditions.

“This is a great example of developing advanced materials with a direct correlation to saving energy,” Cook said. “Though the original discovery wasn’t by design, we’ve done a great deal of basic research in trying to figure out the molecular structure of these materials, what gives them these properties and how we can use this information to develop other, similar materials.”

Funding for both projects is provided by the DOE’s Office of Energy Efficiency and Renewable Energy. BAM is licensed to Newtech Ceramics, an Iowa-based startup company located in Des Moines. The ISU Research Foundation provided nearly $60,000 in funding for development of material samples for marketing as part of the startup effort.

More here

Quantum Computer

Physicists in the USA and at the London Centre for Nanotechnology have found a way to extend the quantum lifetime of electrons by more than 5,000 per cent, as reported recently in Physical Review Letters. Electrons exhibit a property called ‘spin’ and work like tiny magnets which can point up, down, or in a quantum superposition of both.

Microwaves are used to control the spin state of electrons held in silicon. This spin state can be watched in real time by measuring the electric current flowing between the (grey) electrodes. (Credit: Image courtesy UCL)

The state of the spin can be used to store information and so by extending their life the research provides a significant step towards building a usable quantum computer.

“Silicon has dominated the computing industry for decades,” says Dr Gavin Morley, lead author of the paper. “The most sensitive way to see the quantum behaviour of electrons held in silicon chips uses electrical currents. Unfortunately, the problem has always been that these currents damage the quantum features under study, degrading their usefulness.”

Marshall Stoneham, Professor of Physics at UCL (University College London), commented: “Getting the answer from a quantum computation isn’t easy. This new work takes us closer to solving the problem by showing how we might read out the state of electron spins in a silicon-based quantum computer.”

To achieve the record quantum lifetime the team used a magnetic field twenty-five times stronger than those used in previous experiments. This powerful field also provided an additional advantage in the quest for practical quantum computing: it put the electron spins into a convenient starting state by aligning them all in one direction.
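The alignment effect can be estimated from the thermal-equilibrium spin polarisation, P = tanh(g·μB·B / 2kB·T). The field and temperature below are illustrative assumptions, not necessarily the experiment's exact parameters:

```python
import math

# Thermal-equilibrium electron spin polarisation in a magnetic field:
# P = tanh(g * mu_B * B / (2 * k_B * T)). Values are illustrative.
mu_B = 9.274e-24   # Bohr magneton, J/T
k_B  = 1.381e-23   # Boltzmann constant, J/K
g    = 2.0         # free-electron g-factor (approximately)

def polarisation(B_tesla, T_kelvin):
    return math.tanh(g * mu_B * B_tesla / (2 * k_B * T_kelvin))

print(polarisation(0.35, 3.0))  # ~0.08: weak field, spins barely aligned
print(polarisation(8.5, 3.0))   # ~0.96: strong field aligns nearly all spins
```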

For more information, see the paper published in Physical Review Letters, November 14 2008, by G. W. Morley (London Centre for Nanotechnology), D. R. McCamey (University of Utah), H. A. Seipel (University of Utah), L.-C. Brunel (National High Magnetic Field Laboratory), J. van Tol (National High Magnetic Field Laboratory) and C. Boehme (University of Utah).

Cold Atoms Could Replace Hot Gallium In Focused Ion Beams

Scientists at the National Institute of Standards and Technology (NIST) have developed a radical new method of focusing a stream of ions into a point as small as one nanometer (one billionth of a meter).

NIST researcher Jabez McClelland makes adjustments on the new magneto-optical trap ion source, capable of focusing beams of ions down to nanometer spots for use as a ‘nano-scalpel’ in advanced electronics processing. (Credit: Holmes, NIST)

Because the approach is versatile - it can be used with a wide range of ions tailored to the task at hand - it is expected to have broad application in nanotechnology, both for carving smaller features on semiconductors than is currently possible and for nondestructive imaging of nanoscale structures at finer resolution than electron microscopes can achieve.

Researchers and manufacturers routinely use intense, focused beams of ions to carve nanometer-sized features into a wide variety of targets. In principle, ion beams also could produce better images of nanoscale surface features than conventional electron microscopy.

But the current technology for both applications is problematic. In the most widely used method, a metal-coated needle generates a narrowly focused beam of gallium ions. The high energies needed to focus gallium for milling tasks end up burying small amounts in the sample, contaminating the material. And because gallium ions are so heavy (comparatively speaking), if used to collect images they inadvertently damage the sample, blasting away some of its surface while it is being observed. Researchers have tried using other types of ions but were unable to produce the brightness or intensity necessary for the ion beam to cut into most materials.

The NIST team took a completely different approach to generating a focused ion beam that opens up the possibility for use of non-contaminating elements. Instead of starting with a sharp metal point, they generate a small “cloud” of atoms and then combine magnetic fields with laser light to trap and cool these atoms to extremely low temperatures. Another laser is used to ionize the atoms, and the charged particles are accelerated through a small hole to create a small but energetic beam of ions. Researchers have named the groundbreaking device “MOTIS,” for “Magneto-Optical Trap Ion Source.” (For more on MOTs, see “Bon MOT: Innovative Atom Trap Catches Highly Magnetic Atoms,” NIST Tech Beat Apr. 1, 2008.)

“Because the lasers cool the atoms to a very low temperature, they’re not moving around in random directions very much. As a result, when we accelerate them the ions travel in a highly parallel beam, which is necessary for focusing them down to a very small spot,” explains Jabez McClelland of the NIST Center for Nanoscale Science and Technology.
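A rough sketch of that geometry: the residual sideways speed of the ions is set by the atoms' temperature, while the forward speed is set by the accelerating voltage, so the beam's divergence is roughly their ratio. The temperature and voltage below are assumptions for illustration:

```python
import math

# Why laser cooling yields a highly parallel ion beam: transverse speed
# comes from the atoms' temperature, forward speed from the accelerating
# voltage. Trap temperature and voltage are assumed values.
k_B = 1.381e-23           # Boltzmann constant, J/K
q   = 1.602e-19           # elementary charge, C
m   = 52 * 1.6605e-27     # mass of a chromium-52 atom, kg

T_atoms = 100e-6          # assumed trap temperature: 100 microkelvin
V_accel = 1000.0          # assumed accelerating potential: 1 kV

v_transverse = math.sqrt(k_B * T_atoms / m)       # ~0.13 m/s
v_forward    = math.sqrt(2 * q * V_accel / m)     # ~6e4 m/s
print(f"divergence half-angle ~ {v_transverse / v_forward:.1e} rad")  # ~2e-6
```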

The team was able to measure the tiny spread of the beam and show that it was indeed small enough to allow the beam to be focused to a spot size less than 1 nanometer. The initial demonstration used chromium atoms, establishing that other elements besides gallium can achieve the brightness and intensity to work as a focused ion beam “nano-scalpel.” The same technique, says McClelland, can be used with a wide variety of other atoms, which could be selected for special tasks such as milling nanoscale features without introducing contaminants, or to enhance contrast for ion beam microscopy.

Is nanotechnology a health timebomb?

Nanotechnology-based products are hitting the market without being properly assessed for safety, warns the UK's Royal Commission on Environmental Pollution (RCEP) - and that's a risk too far.

The possible health and environmental effects of buckyballs and other nanostructures are largely unknown

But there are safety rules for all consumer products, aren’t there?

Yes, but because nanomaterials are often made using chemicals like silver and carbon that are considered safe when used on a macro scale, the commission says they are slipping under the regulatory net when used at the nanoscale - without any consideration of the potentially adverse physical or chemical effects their novel nanostructures may have on people, animals, and the environment.

What does the commission want?

The commission is calling for the European Union to extend its regulatory regime for chemicals (REACH) to properly assess nanomaterials and their unique properties.

In the UK, it wants the Department for Environment, Food and Rural Affairs (DEFRA) to develop and undertake tests on products that contain nanomaterials, and to develop devices that detect nanomaterials such as carbon nanotubes when they become airborne.

“We have no means of detecting buckyballs or nanotubes in the environment right now,” says John Lawton, the RCEP’s chairman.

Haven’t we been here before?

Yes. But since 2004 when the Royal Society and Royal Academy of Engineering first said that a programme of research was necessary to ensure the safety of nanotech products, the field has moved on in leaps and bounds.

“The rate of nanotechnology innovation now far outstrips our capacity to respond to the risks,” says Lawton.

The RCEP thinks the arrival of products in our high streets means it’s time to reiterate the need for safety tests - as the earlier call fell on deaf ears in government. It also wants to avoid polarising public opinion, as happened with genetic modification.

How many novel technologies are we talking about?

The number of patents filed on nanomaterials worldwide by 2006 reached 1600 - and that growth has continued exponentially. According to the Project on Emerging Nanotechnologies in Washington, DC, there are at least 600 products on the global market that claim to contain a nanomaterial as a key ingredient.

What kind of products contain nanomaterials?

Well, the range is broad - and there could be health and environmental problems with any of them. They include sunscreens, cleaning products, anti-odour treatments for clothes, cosmetics, smart plastics, ceramics, self-cleaning glasses, composites, carbon-fibre-based textiles and other products containing nanotubes and buckyballs.

Which ones are causing concern?

All of them, to some extent. But the commission singled out two. The first is "nanosilver", a bactericide which slows the growth of odour-causing bacteria in clothes such as socks, underpants and T-shirts.

The second is a textile comprising spun fibres made from carbon nanotubes that could save the clothing industry a fortune by making clothes that don’t need dyes - their thread diameter dictates their colour through refraction effects.

How might these products cause harm?

“Nanosilver is biocidal to a remarkable extent - it’s extremely toxic to microorganisms,” says Lawton. In fact, it will kill twice the number of bacteria that bleach can.

No one knows what could happen when nanosilver is flushed into watercourses. It could stop the biochemical reactions that make your local sewage-processing plant work. Or it may damage aquatic life - buckyballs have already been shown to cause brain damage in fish.

There have been reports that the carbon fibres in clothing could produce asbestosis-like lung diseases, and that spilled nanotubes could damage ecosystems.

Why not just ban nanotech products?

The RCEP thinks the advantages to society of nanotechnologies are too great to lose. “On balance there are no grounds for a blanket ban,” says Lawton.

Instead, he simply wants a major increase in the amount of testing to assess risk - prioritising the materials that may present the greatest risk to the environment and human health.

“Research gaps need to be addressed urgently, especially given the long lead times involved in developing and putting in place testing arrangements that will inform regulatory and legislative processes,” he says.

Careers: A fresh start in the Alps

FOR a nation with a history of making complicated clockwork, it is no surprise that Switzerland is top of the heap when it comes to precision, high-tech research. The country boasts two Federal Institutes of Technology, the CERN particle physics laboratory and a major IBM research facility. It is also home to big names in pharmaceuticals such as Roche and Novartis - and who can forget its world-famous chocolate industry?

With British citizens able to work in Switzerland visa-free, annual salaries of up to £72,000 for experienced researchers and the option of skiing in your lunch break, it’s easy to see why Switzerland appeals to so many. So where can you make your mark?

Computing clout

IBM is one of many global companies that have research centres in Switzerland. Its Rüschlikon lab, just south of Zurich, attracts talent from all over the world: 80 per cent of the research staff come from abroad.

The lab is a leader in digital storage technology as well as semiconductor and optical electronics for computer networks. Plans to build a top-class nanotechnology research centre on the site are under way: it is scheduled to be completed in 2011.

The lab recruits from a range of disciplines, including physics, chemistry and maths, says Irene Holenweger Koeb of IBM human resources. It also has a thriving bioscience group working on the application of nanotechnology to the life sciences, among other areas. Most positions require a PhD, though the lab also employs around 100 undergraduates and graduates each year.

Paul Hurley, a researcher in IBM’s systems software group, enjoys the informality of his working environment: IBM encourages a relaxed office culture that includes meetings over lunch or coffee.

With so many of its employees not being Swiss nationals, the company offers ample support to help new employees acclimatise and has a policy of paying relocation expenses. “It’s important to us that new hires settle in easily,” says Koeb.

German lessons, paid for by IBM, bring together employees who are new to Zurich. The standard German taught is different from what Zurich natives speak, so although Hurley has attended the classes, he says it takes a bit more practice to pick up the “Swiss-isms”.

Raising the chocolate bar

Switzerland is known for its chocolate, but being Swiss is not a prerequisite for making it well. “In our company we have 44 nationalities and 18 languages,” says José Rubio of Lindt’s human resources department.

Scientists can find jobs in quality management, research and development and on the factory floor. Those working in R&D help develop new recipes and products, as well as designing and building new machines for making them. However, you might prefer to hone your skills in quality management, where you will have the pleasant task of testing the products to make sure they are up to the company’s high standards.

Foreign staff must speak at least one of the Swiss official tongues, says Rubio. Most positions require a good level of German, which is particularly important when working with Swiss colleagues on production lines, as many do not understand English.

Lindt draws many of its employees from two major higher-education institutions around Zurich: the Swiss Federal Institute of Technology Zurich (ETH Zurich) and the Institute of Food and Beverage Innovation, part of the Zurich University of Applied Sciences. Enrolling at one of these can give young food scientists an edge in getting a job at Lindt or another Swiss food manufacturer.

The ETH in German-speaking Zurich has a sister institution, the Federal Institute of Technology in French-speaking Lausanne (EPFL). With over 250 research groups and 10,000 students and faculty, it emphasises interdisciplinary scientific research. “We have a strong neurosciences group,” says Mary Parlange of EPFL’s human resources department, who also cites robotics and plasma physics as some of its strengths. The institute’s technology transfer programmes ensure that useful tools and methods make it out of the lab and into industry.

EPFL also builds bridges to other institutions, maintaining close ties with the University of Lausanne and beyond. “We’re one of the leading collaborators at the nuclear facility ITER,” Parlange adds, referring to the fusion laboratory being built in France.

Paul Hurley from IBM became strongly attracted to Switzerland as a student at EPFL. “I was amazed at the salary that I could be offered as a PhD,” he says, adding that students in the UK sometimes have to “fend for themselves” in terms of funding. Jacques Giovanola, head of EPFL’s doctoral school, says that nearly 95 per cent of its PhD students have salaries secured by their supervisors.

Nanotechnology rules, OK!

More than 30 years ago, Richard Feynman amazed physicists with his vision of the future. ‘Consider the final question as to whether, ultimately - in the great future - we can arrange atoms the way we want; the very atoms, all the way down! What would happen if we could arrange atoms one by one the way we want them?’ Feynman was speaking at a meeting of the American Physical Society on 29 December 1959.

What has happened is that scientists have started indulging in microscopic graffiti. The instrument that makes this possible is the scanning tunnelling microscope - invented 11 years ago to produce images of surfaces showing the arrangement of individual atoms. In the past few years scientists have been using the extremely fine tip of the microscope to modify surfaces as well. The temptation to leave their mark in messages only a few atoms high is irresistible. …

DNA dirty tricks loom in future elections

The genetic make-up of a candidate in the next US presidential election could be exploited by an opponent to raise doubts about their health or personality. So say medical researcher Robert Green and medical lawyer George Annas, both at Boston University.

Anyone who wants a sample of a candidate’s DNA could probably get it from coffee cups or cutlery that the person has used, or perhaps even handshakes. Combine that with the fact that a well-funded campaign could now afford to pay for a whole-genome scan, and the divulging of a candidate’s genome becomes a genuine possibility, Green and Annas write in The New England Journal of Medicine (vol 359, p 2192).

Such an act is more likely to aid demagoguery than make reliable predictions, though: at present, little is known about the genetic roots of personality, while most genes associated with a disease only slightly bump up the risk of developing the condition.

“You could say truthfully that candidate A is at elevated risk for disease X, but that might increase his or her risk from 6 per cent to 6.5 per cent,” says Green. “That’s not really very meaningful.”

Unscrupulous opponents could nevertheless try to exploit the idea that the candidate’s “bad genes” make him or her a poor choice, however misleading such a statement might be to those who don’t understand such details.

George Church, a professor of genetics at Harvard Medical School in Boston, sees one possible benefit to set against such dangers. A dust-up over a presidential candidate’s genes could motivate the general public to learn more about genetics, he says.

Journal reference: New England Journal of Medicine, vol 359, p 2192

Tunnelling nanotubes: Life’s secret network

HAD Amin Rustom not messed up, he would not have stumbled upon one of the biggest discoveries in biology of recent times. It all began in 2000, when he saw something strange under his microscope. A very long, thin tube had formed between two of the rat cells that he was studying. It looked like nothing he had ever seen before.

http://www.newscientist.com/data/images/ns/cms/mg20026821.400/mg20026821.400-1_300.jpg

His supervisor, Hans-Hermann Gerdes, asked him to repeat the experiment. Rustom did, and saw nothing unusual. When Gerdes grilled him, Rustom admitted that the first time around he had not followed the standard protocol of swapping the liquid in which the cells were growing between observations. Gerdes made him redo the experiment, mistakes and all, and there they were again: long, delicate connections between cells. This was something new - a previously unknown way in which animal cells can communicate with each other.

Gerdes and Rustom, then at Heidelberg University in Germany, called the connections tunnelling nanotubes. Aware that they might be onto something significant, the duo slogged away to produce convincing evidence and eventually published a landmark paper in 2004 (Science, vol 303, p 1007).

A mere curiosity?

At the time, it was not clear whether these structures were anything more than a curiosity seen only in peculiar circumstances. Since their pioneering paper appeared, however, other groups have started finding nanotubes in all sorts of places, from nerve cells to heart cells. And far from being a mere curiosity, they seem to play a major role in everything from how our immune system responds to attacks to how damaged muscle is repaired after a heart attack.

They can also be hijacked: nanotubes may provide HIV with a network of secret tunnels that allow it to evade the immune system, while some cancers could be using nanotubes to subvert chemotherapy. Simply put, tunnelling nanotubes appear to be everywhere, in sickness and in health. “The field is very hot,” says Gerdes, now at the University of Bergen in Norway.

It has long been known that the interiors of neighbouring plant cells are sometimes directly connected by a network of nanotubular connections called plasmodesmata. However, nothing like them had ever been seen in animals. Animal cells were thought to communicate almost entirely by releasing chemicals that can be detected by receptors on the surface of other cells. This kind of communication can be very specific - nerve cells can extend over a metre to make connections with other cells - but it does not involve direct connections between the interiors of cells.

Quite different

The closest animal equivalents to plasmodesmata were thought to be gap junctions, which are like hollow rivets joining the membranes of adjacent cells. A channel through the middle of each gap junction directly connects the cell interiors, but the channel is very narrow - just 0.5 to 2 nanometres wide - and so only allows ions and small molecules to pass from one cell to another.

Nanotubes are something different. They are 50 to 200 nanometres thick, which is more than wide enough to allow proteins to pass through. What’s more, they can span distances of several cell diameters, wiggling around obstacles to connect the insides of two cells some distance apart. “This gives the organism a new way to communicate very selectively over long range,” says Gerdes.

Soon after they first saw nanotubes in rat cells, he and Rustom saw them forming between human kidney cells too. Using video microscopy, they watched adjacent cells reach out to each other with antenna-like projections, establish contact and then build the tubular connections. The connections were not just between pairs of cells. Cells can send out several nanotubes, forming an intricate and transient network of linked cells lasting anything from minutes to hours. Using fluorescent proteins, the team also discovered that relatively large cellular structures, or organelles, could move from one cell to another through the nanotubes.

Tube travelling

The first clue to how membrane nanotubes, as some researchers prefer to call them, might be used by cells came from the US. Simon Watkins of the University of Pittsburgh, Pennsylvania, and his colleagues were studying dendritic cells, the sentinels for the immune system. When a dendritic cell detects an invader, it gets ready to sound the alarm. One sign of this activation is a change in calcium levels in the cell.

While Watkins was poking a dendritic cell with a micro-needle filled with bacterial toxins, he noticed a calcium fluctuation in a dendritic cell far away from the one that was touched. “Wow, that’s pretty cool,” thought Watkins. Information about the toxins was somehow being passed from the cell being poked to a distant cell. Nothing in his experience could explain the phenomenon.

When Watkins dived into the literature, he discovered Gerdes’s paper. His team then took another look at the dendritic cells. Sure enough, they found the cells were connected by a network of tunnelling nanotubes.

more on this

New levitation technique floats water with noise

http://www.newscientist.com/data/images/ns/cms/dn16083/dn16083-1_300.jpg

Some musical shake, rattle and roll has led Belgian physicists to develop a new kind of levitation. It only works for tiny drops of liquid, but could provide a new way to handle biological or forensic samples without contaminating them.

Researchers at the University of Liège, Belgium, set out to recreate the way water droplets spilt on speakers dance to bass vibrations. They substituted a vibrating bath of oil for the speakers and released 1-millimetre-wide droplets of a less viscous oil on top.

Air cushion

When the oil in the bath vibrates at the right frequency - about 110 hertz - the droplets bounce and roll freely on an air cushion above it.

This happens because a droplet has to push the air beneath it out of the way in order to fall. But when the bath is vibrating at the right frequency, it creates a resonating layer of air over its surface that isn’t pushed aside so easily.
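A quick way to see the scales involved: for a droplet to keep bouncing, the bath's peak acceleration must exceed gravity, i.e. amplitude × (angular frequency)² > g. This threshold estimate is a rough back-of-envelope sketch, not the Liège group's detailed model:

```python
import math

# Rough bouncing threshold: the bath's peak acceleration A * omega**2 must
# exceed g. This gives the minimum shake amplitude at the article's 110 Hz.
g = 9.81                      # m/s^2
f = 110.0                     # drive frequency from the article, Hz
omega = 2 * math.pi * f

A_min = g / omega**2          # minimum shake amplitude, metres
print(f"minimum amplitude ~ {A_min * 1e6:.0f} micrometres")  # ~21 um
```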

The droplets bob around without touching the surface of the bath, at a height of between 150 nanometres and 1.5 centimetres.

“This is the first time this spontaneous rolling motion has been observed in this kind of system,” says Stéphane Dorbolo, a member of the team who undertook the study. Making sure the vibrating bath was ten times more viscous than the droplets being levitated produced the best results. That ensures that the bouncing droplets do not create ripples on the oil’s surface when they push down on the cushion of air holding them aloft.

The team also tried out their technique with water droplets over oil and achieved similar results. “This is a very convenient way to move droplets,” says Tristan Gilet, also at Liège, pointing out that it will also work for liquids such as sweat, tears or chemicals dissolved in water.

Chemistry in a droplet

Bouncing droplets could even be made to collide with others containing laboratory reagents, for example, an acid or a dye. “You introduce another droplet to your fluid under test, they bounce together and eventually coalesce - you have a chemical reaction that occurs in this new big bouncing droplet,” says Gilet.

Dealing with tiny samples usually involves the use of microfluidic chips with minuscule engraved channels. But cleaning them is difficult and it is not always easy to keep liquids separate until they are ready to be mixed. The bouncing droplets offer a way around these problems.

Richard Hill, a physicist at the University of Nottingham, UK, told New Scientist: “This technique will be very useful for manipulating liquids where only very small quantities of the fluid are obtainable, for example, in biosciences or forensics where it is extremely important to avoid contamination of the liquid.”

Dorbolo is quick to point out that the bouncing levitation technique is far from perfected. Future research will include floating multilayered droplets with an oil coat surrounding a watery interior.

Journal reference: New Journal of Physics (DOI: 10.1088/1367-2630/10/11/113021)

‘Interplanetary internet’ passes first test

NASA has finished its first deep-space test of what could become an ‘interplanetary internet’. The new networking commands could one day be used to automatically relay information between Earth, spacecraft, and astronauts, without the need for humans to schedule transmissions at each point.

http://www.newscientist.com/data/images/ns/cms/dn16086/dn16086-1_300.jpg

Spacecraft usually communicate directly with Earth - the first to do so through an intermediary were the Mars Exploration Rovers, which launched in 2003. The Spirit and Opportunity rovers transmit data to orbiters, which then send the data back to Earth.

But human intervention is still required to schedule communications sessions for orbiters and landers. “The traditional method of operations is largely manual,” says Jay Wyatt of NASA’s Jet Propulsion Laboratory in Pasadena, California. “People get in a room and decide when they can send data.”

A new method would automate and streamline this process by sending data through an interplanetary ‘internet’. Just as data is sent from one point to another on the internet via a linked network of hubs, or nodes, spacecraft scattered throughout the solar system could be used as nodes to transmit data through space.

Last week, NASA completed a month-long test of a simulated network of Mars landers, orbiters and mission operations centres on Earth.

For the test, dozens of images of Mars and its moon Phobos were transmitted back and forth between computers on Earth and NASA’s Deep Impact spacecraft. The craft, which sent an impactor into Comet Tempel 1 in 2005, has been renamed “Epoxi” now that its mission has been extended to search for extrasolar planets.

Internet pioneer

Also transmitted were a four-node diagram of the internet’s ancestor, ARPANET, and a photograph of networking visionary J C R Licklider.

The test was the culmination of a collaboration between internet pioneer Vinton Cerf and NASA that began in 1999.

The new protocol is somewhat different from the one that forms the backbone of the internet, called TCP/IP. On Earth, if some data is lost between a sender and a recipient, the two communicate back and forth until all the information is sent.

That ‘handshake’ works well on Earth, where the network is almost always continuously connected, says Adrian Hooke, team leader at NASA Headquarters in Washington, DC.

But in space, probes pass behind planets and out of range, power outages are common, and distances between planets vary as the planets move in their orbits. In addition, at distances not far beyond the Moon, the time required to beam data between a sender and a recipient makes back-and-forth communication between the two inefficient, says Hooke.
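The distances alone rule out chatty protocols. A quick calculation of one-way light time to Mars at its closest and farthest approaches to Earth makes the point:

```python
# One-way light time to Mars varies from roughly 3 to 22 minutes as the
# planets move in their orbits, so round-trip handshakes become impractical.
C = 299_792_458  # speed of light, m/s

for label, d_km in [("Mars at closest", 54.6e6), ("Mars at farthest", 401e6)]:
    one_way_s = d_km * 1000 / C
    print(f"{label}: one-way {one_way_s/60:.1f} min, "
          f"round trip {2*one_way_s/60:.1f} min")
```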

Space hackers

To avoid such issues, the new protocol, called Disruption- or Delay-Tolerant Networking (DTN), commands each node in the network to store information until it can find another node that can receive the information.

Data is relayed in a chain and should only need to be transmitted once. “The nodes themselves can take care of making sure the data moves progressively from the source to its destination,” Hooke told New Scientist.
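In code, the store-and-forward idea reduces to a buffer per node that drains only when a contact is available. The toy sketch below is purely illustrative; it is not NASA's DTN implementation:

```python
# A toy store-and-forward node in the spirit of DTN: each node holds a
# bundle until a link to the next hop is available, so no end-to-end
# handshake is ever required. Purely illustrative.
from collections import deque

class Node:
    def __init__(self, name):
        self.name = name
        self.buffer = deque()   # bundles waiting for a contact

    def receive(self, bundle):
        self.buffer.append(bundle)

    def forward(self, next_hop, link_up):
        # Transmit only while the contact window is open; otherwise keep
        # storing. Each bundle moves hop by hop, transmitted once per link.
        while link_up and self.buffer:
            next_hop.receive(self.buffer.popleft())

lander, orbiter, earth = Node("lander"), Node("orbiter"), Node("earth")
lander.receive("image-001")
lander.forward(orbiter, link_up=True)    # lander-orbiter pass
orbiter.forward(earth, link_up=False)    # Earth out of view: bundle stored
orbiter.forward(earth, link_up=True)     # next contact: bundle delivered
print(list(earth.buffer))                # ['image-001']
```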

To guard against hackers, the data transmitted over DTN is encrypted. In order to transmit or accept data, a node must identify itself to its companion, a concept called ‘mutual suspicion.’

International network

On Earth, DTN has been tested in a variety of projects - from boosting cellular connections in remote locations and improving battlefield communications to using snowmobiles to extend internet access to reindeer herders.

Hooke hopes to incorporate the protocol on upcoming space missions, beginning with robotic missions to the Moon. “The goal is by the end of 2011 to have these protocols ready to go out of the box, so we can give them to project managers to load onto spacecraft,” Hooke says.

The team is also working to get the protocol accepted by the international community, so that other spacecraft could join the network.

Spacecraft communicating through DTN could also alleviate traffic on NASA’s Deep Space Network, a collection of ground-based radio antennas used to communicate with space probes. Some say the network will soon have trouble meeting demands on its time.

The DTN protocol has been erased from Epoxi, one of the conditions for use of the spacecraft, Hooke says. But the team plans to set up a permanent DTN node at the International Space Station. The protocol will be uploaded to a payload aboard the station in mid-2009.

DNA strands become fibre optic cables

http://www.newscientist.com/data/images/ns/cms/dn16029/dn16029-1_300.png

Thanks to a new technique, DNA strands can be easily converted into tiny fibre optic cables that guide light along their length. Optical fibres made this way could be important in optical computers, which use light rather than electricity to perform calculations, or in artificial photosynthesis systems that may replace today’s solar panels.

Both kinds of device need small-scale light-carrying “wires” that pipe photons to where they are needed. Now Bo Albinsson and his colleagues at Chalmers University of Technology in Gothenburg, Sweden, have worked out how to make them. The wires build themselves from a mixture of DNA and molecules called chromophores that can absorb and pass on light.

The result is similar to natural photonic wires found inside organisms like algae, where they are used to transport photons to parts of a cell where their energy can be tapped. In these wires, chromophores are lined up in chains to channel photons.

Light wire

Albinsson’s team used a single type of chromophore called YO as their energy mediator. It has a strong affinity for DNA molecules and readily wedges itself between the “rungs” of bases that make up a DNA strand. The result is strands of DNA with YO chromophores along their length, transforming the strands into photonic wires just a few nanometres in diameter and 20 nanometres long. That’s the right scale to function as interconnects in microchips, says Albinsson.

To prove this was happening, the team made DNA strands with an “input” molecule at one end to absorb light, and at the other end a molecule that emits light when energy reaches it from a neighbouring chromophore. When the team shone UV light on a collection of the DNA strands after they had been treated with YO, the finished wires transmitted around 30% of the light absorbed by the input molecule along to the emitting molecule.
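One way to read that 30% figure: if each chromophore-to-chromophore hop succeeds with some probability p, an n-hop chain delivers p^n of the input, so losses compound quickly with length. The hop count below is an assumption chosen only to illustrate the scaling, not a measured value:

```python
# If each hop along the chromophore chain succeeds with probability p,
# an n-hop wire transmits p**n of the input light. n is assumed here.
n_hops = 10                      # assumed number of hops along the 20 nm wire
end_to_end = 0.30                # transmission reported in the article

per_hop = end_to_end ** (1 / n_hops)
print(f"implied per-hop efficiency ~ {per_hop:.2f}")      # ~0.89
print(f"doubling the wire: {end_to_end**2:.3f} overall")  # losses compound
```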

Physicists have corralled chromophores for their own purposes in the past, but had to use a “tedious” and complex technique that chemically attaches them to a DNA scaffold, says Niek van Hulst, at the Institute of Photonic Sciences in Barcelona, Spain, who was not involved in the work.

The Gothenburg group’s ready-mix approach gets comparable results, says Albinsson. Because his wires assemble themselves, he says they are better than wires made by the previous chemical method as they can self-repair: if a chromophore is damaged and falls free of the DNA strand, another will readily take its place. It should be possible to transfer information along the strands encoded in pulses of light, he told New Scientist.

Variable results

Philip Tinnefeld at the Ludwig Maximilian University of Munich in Germany says a price has been paid for the added simplicity.

Because the wire is self-assembled, he says, it’s not clear exactly where the chromophores lie along the DNA strand. They are unlikely to be spread out evenly and the variation between strands could be large.

Van Hulst agrees and is investigating whether synthetic molecules made from scratch can be more efficient than modified DNA.

But both researchers think that with improvements, “molecular photonics” could have a wide range of applications, from photonic circuitry in molecular computers to light harvesting in artificial photosynthetic systems.

Journal reference: Journal of the American Chemical Society (DOI: 10.1021/ja803407t)

Diamond layers replacing kidneys

http://www.newscientist.com/data/images/ns/cms/dn16080/dn16080-1_300.jpg

Kidney failure currently affects around 400,000 people in the US and countless others around the world. And even for people who can access it, plugging into a dialysis machine is far from an ideal solution.

As well as forcing people to structure their lives around the process, dialysis is not as efficient as a real kidney at removing toxic chemicals from the blood, while leaving important biomolecules untouched.

A new kind of filter can avoid those problems, though, and be small enough to implant inside the body, a new patent application claims.

Existing dialysis filters have particular problems screening out medium-sized proteins such as β2-microglobulin, which is produced by the immune system and toxic if it builds up. The problem is that larger proteins block the filters designed to deal with those mid-size compounds.

Now William Fissell at the Cleveland Clinic in Ohio and colleagues at the University of Michigan have developed a filter made from a series of diamond layers drilled with successively smaller microscopic holes.

Each layer only allows molecules below a certain size to pass through. And an electric field keeps away larger proteins that would otherwise clog its pores.

This makes the filter more effective at removing toxic molecules from the blood stream than conventional membranes. What’s more, Fissell says, the diamond device is small enough to be implanted into the body and works at ordinary blood pressures.
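A toy model of the stacked layers makes the size-sorting logic concrete: each layer passes only molecules smaller than its pores, so small toxins clear the whole stack (and leave the blood) while large, valuable proteins are stopped early. The pore sizes and molecular diameters below are illustrative guesses, not figures from the patent:

```python
# Toy model of a stacked size-exclusion filter: each diamond layer passes
# only molecules smaller than its pores. Pore sizes and molecular diameters
# are illustrative assumptions, not values from the patent.
layers_nm = [8.0, 5.0, 3.5]  # assumed pore diameters, coarse to fine

molecules_nm = {"albumin": 7.2, "beta2-microglobulin": 3.0, "urea": 0.5}

for name, size in molecules_nm.items():
    blocked_at = next((i for i, pore in enumerate(layers_nm) if size >= pore),
                      None)
    if blocked_at is None:
        print(f"{name}: clears every layer -> removed from the blood")
    else:
        print(f"{name}: stopped at layer {blocked_at} -> stays in the blood")
```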

Read the full bioartificial kidney patent application


Unlimited clean energy

http://www.newscientist.com/data/images/ns/cms/mg20026836.000/mg20026836.000-1_300.jpg

FOR a company whose business is rocket science, Lockheed Martin has been paying unusual attention to plumbing of late. The aerospace giant has kept its engineers occupied for the past 12 months poring over designs for what amounts to a very long fibreglass pipe.

It is, of course, no ordinary pipe but an integral part of the technology behind Ocean Thermal Energy Conversion (OTEC), a clean, renewable energy source that has the potential to free many economies from their dependence on oil.

“This has the potential to become the biggest source of renewable energy in the world,” says Robert Cohen, who headed the US federal ocean thermal energy programme in the early 1970s.

As the price of fossil fuels soars, private companies from Hawaii to Japan are racing to build commercial OTEC plants. The trick is to exploit the difference in temperature between seawater near the surface and deep down.

First, warm surface water heats a fluid with a low boiling point, such as ammonia or a mixture of ammonia and water. When this “working fluid” boils, the resulting gas creates enough pressure to drive a turbine that generates power. The gas is then cooled by passing it through cold water pumped up from the ocean depths via massive fibreglass tubes, perhaps 1000 metres long and 27 metres in diameter, that suck up cold water at a rate of 1000 tonnes per second. While the gas condenses back into a liquid that can be used again, the water is returned to the deep ocean. “It’s just like a conventional power plant where you burn a fuel like coal to create steam,” says Cohen.
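The catch is thermodynamic: with only about a 20 °C gap between the two reservoirs, the Carnot limit caps efficiency at a few per cent, which is why OTEC plants must move such enormous volumes of water. A quick sketch with typical tropical temperatures (assumed, not taken from the article):

```python
# Carnot ceiling on OTEC efficiency, set by the small temperature gap
# between surface and deep water. Temperatures are assumed typical values.
T_surface = 25 + 273.15   # warm surface water, K
T_deep    = 5 + 273.15    # water pumped up from ~1000 m, K

eta_carnot = 1 - T_deep / T_surface
print(f"Carnot limit ~ {eta_carnot:.1%}")   # ~6.7%
```

Even a few per cent of a vast, continuously replenished heat flow adds up, but it explains the 1000-tonne-per-second cold-water pipes.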

The idea of tapping the ocean’s different thermal layers to generate electricity was first proposed in 1881 by French physicist Jacques d’Arsonval but didn’t receive much attention until the world oil crises of the 1970s. In 1979, a US government-backed partnership that included Lockheed Martin lowered a cold water pipe from a barge off Hawaii as part of an OTEC system generating 50 kilowatts of electricity. Two years later, a Japanese group built a pilot plant off the South Pacific island of Nauru capable of generating 120 kilowatts.

In the first flush of success, the US Department of Energy began planning a 40 megawatt test plant off Hawaii. Then in 1981, the funding for ocean thermal technologies began to dwindle. It dried up altogether in 1995 when the price of oil began to drop, eventually falling below $20 a barrel.

Now rising fuel costs have revived interest in this neglected technology. In September, the Department of Energy awarded its first grant for ocean thermal energy in more than a decade, giving Lockheed Martin $600,000 to develop a new generation of cold water pipes.

Cohen believes this could eventually lead to 500 MW OTEC plants on floating offshore platforms sending electricity to onshore grids via submarine cables, and factory ships “grazing” the open ocean for power.

Lockheed’s first goal is to get a test facility up and running. The company has got together with Makai Ocean Engineering of Waimanalo, Hawaii, to build a 10 to 20 MW plant, most likely off Hawaii, that it hopes to have up and running in the next four to six years. The plant - including a 1000-metre pipe some 4 metres in diameter - would feed electricity to the island’s energy grid via submarine cables.

While Lockheed gears up for its test facility, a plant for the US military could come online even sooner. OCEES International, based in Honolulu, is finishing designs for an ocean thermal facility to be built off the island of Diego Garcia in the Indian Ocean, which is home to a major US military base.

The plant would provide 8 MW of electricity and would also power the desalination of 1.25 million gallons of seawater per day. OCEES says it could be up and running by the end of 2011.

At the moment Diego Garcia is powered entirely by diesel fuel, and base commanders see ocean thermal as a means to energy independence. “It’s a strategic military installation in the middle of the Indian Ocean,” says Harry Jackson of OCEES. “They don’t want to rely on others to provide their power.”

“I think OTEC has the potential to develop sufficient power output much quicker than wave buoys or tidal power would,” says Bill Tayler, director of the US navy’s Shore Energy Office. “It would take a lot of buoys to produce 8 to 10 MW of power. We’re looking at them all but have our hopes on OTEC.”

Still, both teams will have to work out issues such as how to connect the floating, bobbing platforms to fixed submarine power lines. Heat exchangers will have to be designed in a way that prevents excessive buildup of algae, barnacles and other marine organisms that could clog the system.

Six ways to build robots that do humans no harm

http://www.newscientist.com/data/images/ns/cms/dn16068/dn16068-1_300.jpg

With the relentless march of technological progress, robots and other automated systems are getting ever smarter. At the same time they are also being given greater responsibilities: driving cars, helping with childcare, carrying weapons, and maybe soon even pulling the trigger.

But should they be trusted to take on such tasks, and how can we be sure that they never take a decision that could cause unintended harm?

The latest contribution to the growing debate over the challenges posed by increasingly powerful and independent robots is the book Moral Machines: Teaching Robots Right from Wrong.

Authors Wendell Wallach, an ethicist at Yale University, and historian and philosopher of cognitive science Colin Allen, at Indiana University, argue that we need to work out how to make robots into responsible and moral machines. It is just a matter of time until a computer or robot takes a decision that will cause a human disaster, they say.

So are there things we can do to minimise the risks? Wallach and Allen take a look at six strategies that could reduce the danger from our own high-tech creations.

Keep them in low-risk situations

Make sure that computers and robots never have to make a decision whose consequences cannot be predicted in advance.

Likelihood of success: Extremely low. Engineers are already building computers and robotic systems whose actions they cannot always predict.

Consumers, industry, and government demand technologies that perform a wide array of tasks, and businesses will expand the products they offer in order to capitalise on this demand. In order to implement this strategy, it would be necessary to arrest further development of computers and robots immediately.

Do not give them weapons

Likelihood of success: Too late. Semi-autonomous robotic weapons systems, including cruise missiles and Predator drones, already exist. A few machine-gun-toting robots were sent to Iraq and photographed on a battlefield, though apparently they were never deployed.

However, military planners are very interested in the development of robotic soldiers, and see them as a means of reducing deaths of human soldiers during warfare.

While it is too late to stop the building of robot weapons, it may not be too late to restrict which weapons they carry, or the situations in which the weapons can be used.

Give them rules like Asimov’s ‘Three Laws of Robotics’

Likelihood of success: Moderate. Isaac Asimov’s famous rules are arranged hierarchically: most importantly, robots should not harm humans or, through inaction, allow them to come to harm; of secondary importance is that they obey humans; robotic self-preservation comes last.

However, Asimov was writing fiction, not building robots. In story after story he illustrates problems that would arise with even these simple rules, such as what the robot should do when orders from two people conflict.

Asimov’s rules task robots with some difficult judgements. For example, how could a robot know that a human surgeon cutting into a patient was trying to help them? Asimov’s robot stories in fact quite clearly demonstrate the limits of any rule-based morality. Nevertheless, rules can successfully restrict the behaviour of robots that function within very limited contexts.
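
One way to see both the appeal and the limits of such a hierarchy is to write it down as code. The Python sketch below, in which every name and score is invented for illustration, treats the three laws as a lexicographic preference, so any amount of disobedience or self-sacrifice is preferred to the smallest predicted human harm:

    # Hypothetical sketch of Asimov's hierarchy as a lexicographic
    # preference. All names and scores here are invented.
    def choose_action(actions, harm, disobedience, self_damage):
        """Least human harm first; ties broken by obedience,
        then by self-preservation."""
        return min(actions,
                   key=lambda a: (harm[a], disobedience[a], self_damage[a]))

    # Two people give conflicting orders: each option disobeys someone,
    # the scores tie, and the hierarchy alone cannot decide.
    actions = ["obey_alice", "obey_bob"]
    harm = {"obey_alice": 0, "obey_bob": 0}
    disobedience = {"obey_alice": 1, "obey_bob": 1}
    self_damage = {"obey_alice": 0, "obey_bob": 0}
    print(choose_action(actions, harm, disobedience, self_damage))

Here min() simply returns the first of the tied options, an arbitrary tie-break of exactly the kind Asimov's stories turn into plots.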

Program robots with principles

Building robots motivated to create the “greatest good for the greatest number”, or to “treat others as you would wish to be treated”, would be safer than laying down simplistic rules.

Likelihood of success: Moderate. Recognising the limits of rules, some ethicists look for an over-riding principle that can be used to evaluate all courses of action.

But the history of ethics is a long debate over the value and limits of many proposed single principles. For example, it could seem logical to sacrifice the life of one person to save the lives of five. But a human doctor would not sacrifice a healthy person simply to supply organs to five people needing transplants. Would a robot?

Sometimes identifying the best option under a given rule can be extremely difficult. For example, determining which course of action leads to the greatest good would require a tremendous amount of knowledge, and an understanding of the effects of actions in the world. Making such calculations would require time and a great deal of computing power.
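
The structure of such a single-principle evaluator is trivial to code; the difficulty lies entirely in its inputs. A hypothetical Python sketch of an expected-utility chooser, with invented numbers for the transplant dilemma above:

    # Hypothetical utilitarian evaluator. The loop is easy; supplying
    # honest outcome probabilities and utilities is the hard part.
    def expected_utility(action, outcomes):
        """outcomes[action] is a list of (probability, utility) pairs."""
        return sum(p * u for p, u in outcomes[action])

    def best_action(outcomes):
        return max(outcomes, key=lambda a: expected_utility(a, outcomes))

    # Invented numbers for the transplant dilemma discussed above:
    outcomes = {
        "sacrifice_one": [(1.0, 5 - 1)],   # five saved, one killed
        "do_nothing":    [(1.0, -5)],      # five die
    }
    print(best_action(outcomes))   # picks "sacrifice_one"

The arithmetic cheerfully endorses the choice no human doctor would make, which is precisely the objection to single-principle approaches.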

Educate robots like children

Machines that learn as they “grow up” could develop sensitivity to the actions that people consider to be right and wrong.

Likelihood of success: Promising, although this strategy requires a few technological breakthroughs. While researchers have created robots able to learn in similar ways to humans, the tools presently available are very limited.

Make machines master emotion

Human-like faculties such as empathy, emotions, and the capacity to read non-verbal social cues should give robots much greater ability to interact with humans. Work has already started on equipping domestic robots with such faculties.

Likelihood of success: Developing emotionally sensitive robots would certainly help implement the previous three solutions discussed. Most of the information we use to make choices and cooperate with others derives from our emotions, as well as our capacity to read gestures and intentions and imagine things from another person’s point of view.

Ancient grave reveals ‘Flintstone’ nuclear family

A Stone Age massacre has provided evidence of the earliest known nuclear family. The evidence also suggests that, just like today, some early humans lived in blended families.

Archaeologists have long suspected that people lived in nuclear families at least as far back as the Stone Age. The idea even has a foothold in popular culture - remember Fred, Wilma and Pebbles Flintstone?

http://www.newscientist.com/data/images/ns/cms/dn16054/dn16054-1_300.jpg

But the evidence for Stone Age nuclear families has been flimsy, mainly based on extrapolations from how we live now, and speculations about relationships between adults and children found buried together.

“We have been inferring the past from the present, but it wasn’t necessarily true. Now, we have tested the hypothesis and found that at least one Stone Age nuclear family existed,” says Wolfgang Haak, who led a team at Johannes Gutenberg University in Mainz.

The new evidence comes from a detailed analysis of the remains of 13 people, buried in four gravesites in Eulau in Germany, dating to the later Stone Age, 4600 years ago.

Family ties

All the signs are that these people died violent deaths - one female had an arrowhead embedded in her spine, and the head and forearms of several other adults and children had stone-axe marks.

Examination of the skeletons found that the adults were aged between 25 and 60 - old for that period - and that the children were younger than 13. Several of the adults had partially healed injuries.

“These were the old and the injured, children and women. Whatever violence happened that day, they were not capable of fighting,” says Haak, now at the Australian Centre for Ancient DNA at Adelaide University.

Analysis of DNA in the bones and teeth of the remains in one grave found that an adult male and female, and two boys, were the classic nuclear family.

“The two kids have her mitochondrial DNA, and his Y chromosome - that’s a nuclear family,” says molecular anthropologist Brian Kemp of Washington State University in Pullman.
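
The logic of that inference is simple enough to sketch. In the Python below the haplogroup labels are invented placeholders (the study compared actual sequence data), but the inheritance rules are the real ones: mitochondrial DNA passes from a mother to all her children, and the Y chromosome from a father to his sons:

    # Sketch of the parentage test described above. Haplogroup labels
    # are invented; only the inheritance rules are real.
    family = {
        "woman": {"mtDNA": "K1", "Y": None},
        "man":   {"mtDNA": "U5", "Y": "R1a"},
        "boy1":  {"mtDNA": "K1", "Y": "R1a"},
        "boy2":  {"mtDNA": "K1", "Y": "R1a"},
    }

    def consistent_son(child, mother, father):
        # mtDNA is maternally inherited; the Y chromosome passes
        # only from father to son.
        return (child["mtDNA"] == mother["mtDNA"]
                and child["Y"] == father["Y"])

    for boy in ("boy1", "boy2"):
        print(boy, consistent_son(family[boy], family["woman"], family["man"]))

A match shows consistency rather than proof, since unrelated individuals can share haplogroups by chance, which is why the burial context mattered too.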

Arm in arm

Fossil evidence of our ancestors in the middle Stone Age is rare, and Haak points out that it is still unknown when nuclear families became common.

People in prehistory often died young, in childbirth or from disease, potentially making nuclear families unsustainable. Indeed, in a second grave, the one adult, a woman, was not related to the two children buried with her, a brother and a sister. The team was not able to extract DNA from the remains of a third child, an infant facing the woman.

“The fact that they all ended up in the same grave makes us think that they had some relationship in life,” says Haak.

The new findings also suggest an explanation for anomalies in how later Stone Age people in central Europe were buried. Typically, males of all ages rest on their right side, facing south, and women on their left side facing south, but sometimes there are exceptions to this rule.

This was the case with the nuclear family, where each child faced north, towards one of the adults with whom their hands entwined. Haak speculates the arrangement may symbolise blood ties.

Journal reference: Proceedings of the National Academy of Sciences (DOI: 10.1073/pnas.0807592105)

Very high speed processors

32-core CPUs from Intel and AMD

If your CPU has only a single core, it’s officially a dinosaur. In fact, quad-core computing is now commonplace; you can even get laptop computers with four cores today. But we’re really just at the beginning of the core wars: Leadership in the CPU market will soon be decided by who has the most cores, not who has the fastest clock speed.

What is it? With the gigahertz race largely abandoned, both AMD and Intel are trying to pack more cores onto a die in order to continue to improve processing power and aid with multitasking operations. Miniaturizing chips further will be key to fitting these cores and other components into a limited space. Intel will roll out 32-nanometer processors (down from today’s 45nm chips) in 2009.

When is it coming? Intel has been very good about sticking to its road map. A six-core CPU based on the Itanium design should be out imminently, after which Intel will shift focus to a brand-new architecture called Nehalem, to be marketed as Core i7. Core i7 will feature up to eight cores, with eight-core systems available in 2009 or 2010. (An eight-core AMD project called Montreal is reportedly on tap for 2009.)

After that, the timeline gets fuzzy. Intel reportedly canceled a 32-core project called Keifer, slated for 2010, possibly because of its complexity (the company won’t confirm this, though). That many cores requires a new way of dealing with memory; apparently you can’t have 32 brains pulling out of one central pool of RAM. But we still expect cores to proliferate when the kinks are ironed out: 16 cores by 2011 or 2012 is plausible (when transistors are predicted to drop again in size to 22nm), with 32 cores by 2013 or 2014 easily within reach. Intel says “hundreds” of cores may come even farther down the line.
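
Why don't 32 cores simply mean 32 times the speed? Amdahl's law, a standard back-of-the-envelope model rather than anything from the road maps above, shows how the serial fraction of a program caps the gain:

    # Amdahl's law: speedup from n cores when a fraction p of the
    # work can run in parallel. The serial remainder dominates fast.
    def speedup(p, n):
        return 1 / ((1 - p) + p / n)

    for cores in (2, 4, 8, 16, 32):
        print(f"{cores:2d} cores: "
              f"{speedup(0.95, cores):4.1f}x if 95% parallel, "
              f"{speedup(0.50, cores):4.1f}x if 50% parallel")

A program that is 95 percent parallel gets barely 12x from 32 cores, and a 50 percent one never quite reaches 2x, however many cores you throw at it.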

Stand-alone graphics boards

When AMD purchased graphics card maker ATI, most industry observers assumed that the combined company would start working on a CPU-GPU fusion. That work is further along than you may think.

What is it? While GPUs get tons of attention, discrete graphics boards are a comparative rarity among PC owners, as 75 percent of laptop users stick with good old integrated graphics, according to Mercury Research. Among the reasons: the extra cost of a discrete graphics card, the hassle of installing one, and its drain on the battery. Putting graphics functions right on the CPU eliminates all three issues.

Chip makers expect the performance of such on-die GPUs to fall somewhere between that of today’s integrated graphics and stand-alone graphics boards — but eventually, experts believe, their performance could catch up and make discrete graphics obsolete. One potential idea is to devote, say, 4 cores in a 16-core CPU to graphics processing, which could make for blistering gaming experiences.

When is it coming? Intel’s soon-to-come Nehalem chip includes graphics processing within the chip package, but off of the actual CPU die. AMD’s Swift (aka the Shrike platform), the first product in its Fusion line, reportedly takes the same design approach, and is also currently on tap for 2009.

Putting the GPU directly on the same die as the CPU presents challenges — heat being a major one — but that doesn’t mean those issues won’t be worked out. Intel’s two Nehalem follow-ups, Auburndale and Havendale, both slated for late 2009, may be the first chips to put a GPU and a CPU on one die, but the company isn’t saying yet.

USB 3.0 speeds up performance

The USB connector has been one of the greatest success stories in the history of computing, with more than 2 billion USB-connected devices sold to date. But in an age of terabyte hard drives, the once-cool throughput of 480 megabits per second that a USB 2.0 device can realistically provide just doesn’t cut it any longer.

What is it? USB 3.0 (aka “SuperSpeed USB”) promises to increase performance by a factor of 10, pushing the theoretical maximum throughput of the connector all the way up to 4.8 gigabits per second, or processing roughly the equivalent of an entire CD-R disc every second. USB 3.0 devices will use a slightly different connector, but USB 3.0 ports are expected to be backward-compatible with current USB plugs, and vice versa. USB 3.0 should also greatly enhance the power efficiency of USB devices, while increasing the juice (nearly one full amp, up from 0.1 amps) available to them. That means faster charging times for your iPod — and probably even more bizarre USB-connected gear like the toy rocket launchers and beverage coolers that have been festooning people’s desks.
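
The CD-R comparison holds up as rough arithmetic, as a couple of lines of Python show (assuming a nominal 700 MB disc):

    # Sanity-checking the "CD-R per second" claim.
    usb3 = 4.8e9 / 8    # 4.8 Gbps in bytes per second: 600 MB/s
    usb2 = 480e6 / 8    # USB 2.0: 60 MB/s
    cd_r = 700e6        # a nominal 700 MB CD-R, in bytes

    print(f"USB 3.0: {usb3 / cd_r:.2f} discs per second")   # ~0.86
    print(f"USB 2.0: {usb2 / cd_r:.2f} discs per second")   # ~0.09

So "roughly an entire CD-R every second" is fair for the theoretical maximum, while USB 2.0 needs more than ten seconds per disc even before real-world overheads.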

When is it coming? The USB 3.0 spec is nearly finished, with consumer gear now predicted to arrive in 2010. Meanwhile, a host of competing high-speed plugs — DisplayPort, eSATA, and HDMI — will soon become commonplace on PCs, driven largely by the onset of high-def video. Even FireWire is looking at an imminent upgrade to as much as 3.2 Gbps. The port proliferation may make for a baffling landscape on the back of a new PC, but you will at least have plenty of high-performance options for hooking up peripherals.

Wireless power transmission

Wireless power transmission has been a dream since the days when Nikola Tesla imagined a world studded with enormous Tesla coils. But aside from advances in recharging electric toothbrushes, wireless power has so far failed to make significant inroads into consumer-level gear.

What is it? This summer, Intel researchers demonstrated a method — based on MIT research — for throwing electricity a distance of a few feet, without wires and without any dangers to bystanders (well, none that they know about yet). Intel calls the technology a “wireless resonant energy link,” and it works by sending a specific, 10-MHz signal through a coil of wire; a similar, nearby coil of wire resonates in tune with the frequency, causing electrons to flow through that coil too. Though the design is primitive, it can light up a 60-watt bulb with 70 percent efficiency.
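
The quoted 10 MHz pins down one design constraint: both coils must resonate at the same frequency, which for an LC circuit follows from f = 1/(2π√(LC)). The component values below are assumptions chosen for illustration, not details of Intel's apparatus:

    import math

    # Resonance condition for coupled coils: f = 1 / (2*pi*sqrt(L*C)).
    # The 10 MHz signal fixes the product L*C; the inductance below is
    # an assumed value, not Intel's.
    f = 10e6                            # Hz, from the demonstration
    LC = 1 / (2 * math.pi * f) ** 2     # required L*C product
    print(f"L*C = {LC:.2e}")            # about 2.5e-16

    L = 1e-6                            # suppose a 1 microhenry coil
    C = LC / L
    print(f"C = {C * 1e12:.0f} pF")     # about 253 pF

Efficient transfer relies on both coils sharing that resonance, which is why Intel calls the technique a resonant energy link.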

When is it coming? Numerous obstacles remain, the first of which is that the Intel project uses alternating current. To charge gadgets, we’d have to see a direct-current version, and the size of the apparatus would have to be considerably smaller. Numerous regulatory hurdles would likely have to be cleared in commercializing such a system, and it would have to be thoroughly vetted for safety concerns.

Assuming those all go reasonably well, such receiving circuitry could be integrated into the back of your laptop screen in roughly the next six to eight years. It would then be a simple matter for your local airport or even Starbucks to embed the companion power transmitters right into the walls so you can get a quick charge without ever opening up your laptop bag.

64-bit computing

In 1986, Intel introduced its first 32-bit CPU. It wasn’t until 1993 that the first fully 32-bit Windows OS — Windows NT 3.1 — followed, officially ending the 16-bit era. Now 64-bit processors have become the norm in desktops and notebooks, though Microsoft still won’t commit to an all-64-bit Windows. But it can’t live in the 32-bit world forever.

What is it? 64-bit versions of Windows have been around since Windows XP, and 64-bit CPUs have been with us even longer. In fact, virtually every computer sold today has a 64-bit processor under the hood. At some point Microsoft will have to jettison 32-bit altogether, as it did with 16-bit when it launched Windows NT, if it wants to induce consumers (and third-party hardware and software developers) to upgrade. That isn’t likely with Windows 7: The upcoming OS is already being demoed in 32-bit and 64-bit versions. But limitations in 32-bit’s addressing structure will eventually force everyone’s hand; it’s already a problem for 32-bit Vista users, who have found that the OS won’t access more than about 3GB of RAM because it simply doesn’t have the bits to access additional memory.
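
The 3GB ceiling is simple arithmetic on address bits, as two lines of Python make plain:

    # Why 32 bits run out: distinct byte addresses available.
    print(2**32 / 2**30)   # 4.0 - only 4 GB of address space in total

    # Memory-mapped devices (graphics memory, firmware, PCI ranges)
    # occupy part of that space, leaving 32-bit Windows roughly 3 GB
    # for RAM. A 64-bit address space dwarfs the problem:
    print(2**64 / 2**30)   # about 1.7e10 - sixteen billion GB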

When is it coming? Expect to see the shift toward 64-bit accelerate with Windows 7; Microsoft will likely switch over to 64-bit exclusively with Windows 8. That’ll be 2013 at the earliest. Meanwhile, Mac OS X Leopard is already 64-bit, and some hardware manufacturers are currently trying to transition customers to 64-bit versions of Windows (Samsung says it will push its entire PC line to 64-bit in early 2009). And what about 128-bit computing, which would represent the next big jump? Let’s tackle one sea change at a time — and prepare for that move around 2025.

Google’s desktop OS

In case you haven’t noticed, Google now has its well-funded mitts on just about every aspect of computing. From Web browsers to cell phones, soon you’ll be able to spend all day in the Googleverse and never have to leave. Will Google make the jump to building its own PC operating system next?

What is it? It’s everything, or so it seems. Google Checkout provides an alternative to PayPal. Street View is well on its way to taking a picture of every house on every street in the United States. And the fun is just starting: Google’s early-beta Chrome browser earned a 1 percent market share in the first 24 hours of its existence. Android, Google’s cell phone operating system, is hitting handsets as you read this, becoming the first credible challenger to the iPhone among sophisticated customers.

When is it coming? Though Google seems to have covered everything, many observers believe that logically it will next attempt to attack one very big part of the software market: the operating system.

The Chrome browser is the first toe Google has dipped into these waters. While a browser is how users interact with most of Google’s products, making the underlying operating system somewhat irrelevant, Chrome nevertheless needs an OS to operate.

To make Microsoft irrelevant, though, Google would have to work its way through a minefield of device drivers, and even then the result wouldn’t be a good solution for people who have specialized application needs, particularly most business users. But a simple Google OS — perhaps one that’s basically a customized Linux distribution — combined with cheap hardware could be something that changes the PC landscape in ways that smaller players who have toyed with open-source OSs so far haven’t been quite able to do.

Check back in 2011; in the meantime, take a look at the not-affiliated-with-Google gOS from thinkgos.

Gesture-based remote control

We love our mice, really we do. Sometimes, however, such as when we’re sitting on the couch watching a DVD on a laptop, or when we’re working across the room from an MP3-playing PC, it just isn’t convenient to drag a hockey puck and click on what we want. Attempts to replace the venerable mouse — whether with voice recognition or brain-wave scanners — have invariably failed. But an alternative is emerging.

What is it? Compared with the intricacies of voice recognition, gesture recognition is a fairly simple idea that is only now making its way into consumer electronics. The idea is to employ a camera (such as a laptop’s Webcam) to watch the user and react to the person’s hand signals. Holding your palm out flat would indicate “stop,” for example, if you’re playing a movie or a song. And waving a fist around in the air could double as a pointing system: You would just move your fist to the right to move the pointer right, and so on.
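
As a flavour of how simple the basic idea can be, here is a crude Python sketch using OpenCV that compares successive webcam frames and reports motion on the left or right half of the image. It is a toy stand-in for gesture recognition in general, not a description of Toshiba's system:

    import cv2

    # Toy gesture detector: difference successive webcam frames and
    # report which half of the image moved. Ctrl-C to stop.
    cap = cv2.VideoCapture(0)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        moved = cv2.absdiff(gray, prev)            # what changed
        _, moved = cv2.threshold(moved, 40, 255, cv2.THRESH_BINARY)
        h, w = moved.shape
        left = cv2.countNonZero(moved[:, :w // 2])
        right = cv2.countNonZero(moved[:, w // 2:])
        if max(left, right) > 0.1 * h * w:         # enough motion to count
            print("gesture on the", "left" if left > right else "right")
        prev = gray

Real systems track hand shape and pose rather than raw motion, which is where the accuracy problems mentioned below come in.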

When is it coming? Gesture recognition systems are creeping onto the market now. Toshiba, a pioneer in this market, has at least one product out that supports an early version of the technology: the Qosmio G55 laptop, which can recognize gestures to control multimedia playback. The company is also experimenting with a TV version of the technology, which would watch for hand signals via a small camera atop the set. Based on my tests, though, the accuracy of these systems still needs a lot of work.

Gesture recognition is a neat way to pause the DVD on your laptop, but it is probably still some way off from being sophisticated enough for broad adoption. All the same, its successful development would excite tons of interest from the "can't find the remote" crowd. Expect gesture recognition technology to make great strides over the next few years, with inroads into mainstream markets by 2012.
