Friday, December 30, 2011

New group paves way for alternative 2012 choice

By Alan Silverleib, CNN

Washington (CNN) -- Are you feeling uninspired this election season? Are you sick of all the attention being slathered on a small group of die-hard partisans in Iowa and New Hampshire? Do you think the political system's broken and your voice is ignored?
If you're looking for a change from the usual left-right, liberal-conservative, Democrat-Republican dynamic, you may get your wish. There's a new group in the 2012 election, and it's aiming to redefine presidential politics by going around the major party machines and putting an alternative choice on the ballot in all 50 states and the District of Columbia.
Americans Elect, which has raised $22 million so far, is harnessing the power of the Internet to conduct an unprecedented national online primary next spring. If all goes according to plan, the result will be a credible, nonpartisan ticket that pushes alternative centrist solutions to the growing problems America's current political leadership seems unwilling or unable to tackle.
The theory: If you break the stranglehold that more ideologically extreme primary voters and established interests currently have over presidential nominations, you will push Washington to seriously address tough economic and other issues. Even if the group's ticket doesn't win, its impact will force Democrats and Republicans in the nation's capital to start bridging their cavernous ideological divide.
"We're not a third party. We're a second nominating process trying to create a ticket that is solutions-based, that will force the conversation to the center rather than keeping it at the extremes of either party," says Ileana Wachtel, a spokeswoman for the group.
If you think Americans Elect is nothing more than a bunch of naïve dreamers, think again. Its leadership includes former New Jersey GOP Gov. Christine Todd Whitman; former Clinton administration strategist Doug Schoen; former Director of National Intelligence Adm. Dennis Blair; former FBI and CIA Director William Webster; and former U.S. Trade Representative Carla Hills, among others.
The group's CEO is Kahlil Byrd, former communications director for Massachusetts Gov. Deval Patrick, a Democrat. Dan Winslow, a Massachusetts Republican state representative and a former chief counsel to GOP presidential contender Mitt Romney, is also on board.

Funding for the effort was kicked off with over $5 million from investment banker Peter Ackerman. Financially, the ultimate goal is to limit each contributor's donation to no more than $10,000.
Americans Elect strategists believe they'll need around $35 million in total, half of which will likely be necessary to meet cumbersome ballot access requirements.
"The people who provided the seed money to get us started come from across the political spectrum," the group claims on its website. "Giving to Americans Elect buys you no special influence whatsoever, and all donors acknowledge that fact when they contribute."
One point of contention is that the group does not disclose the names of its donors, citing its nonprofit status and fears that contributors could find themselves losing potential business or social contacts. Critics contend the secrecy undermines the organization's claims of openness and transparency, and they argue that any group with such a clear electoral goal should not be exempt from disclosure rules governing the Democratic and Republican national committees.
Any registered voter -- Democrat, Republican, or otherwise -- can become an Americans Elect online delegate. Over 300,000 people have signed up so far. While anyone can seek the group's nomination, possible candidates will have to answer multiple online questionnaires.
Six prospective nominees will eventually be chosen by the delegates in an online winnowing process culminating in the selection of a ticket in June. According to the rules, two members of the same party will not be allowed to run together.
"When candidates pick running mates from outside their parties, it's a clear sign that they're working to build the consensus necessary to get things done," the group argues. "They'll govern without regard to the partisan interests of either major party."
Could New York Mayor Michael Bloomberg team up with former Secretary of State Colin Powell? How about Joe Lieberman and Condoleezza Rice? What about tapping a media celebrity like Tom Brokaw or a deficit hawk like former Clinton chief of staff Erskine Bowles?
The list of possibilities is virtually endless, but the list of criticisms is long. Among other things, critics question whether a fringe group could be stopped from hijacking the process and using Americans Elect to advance its own narrow cause. Possible nominees will have to be cleared by an independent committee and undergo a background check, but the committee's decision can be overruled by a majority of the delegates.
Will the online voting be secure from hackers? "We take that issue very seriously," Wachtel told CNN, noting that each delegate will be able to produce a paper record of his or her vote.
Josh Levine, the former chief technology and operations officer of E*Trade Financial, is tasked with securing the website.
A number of political observers question whether an Americans Elect ticket could ever have a serious shot at winning. For all the talk of voter alienation and disgust with Washington, broad segments of the electorate maintain strong party loyalties, and the country's winner-take-all electoral system remains a huge hurdle for anyone trying to break the two-party stranglehold. Ross Perot won nearly 20% of the vote in 1992 and didn't have a single electoral vote to show for his efforts; 270 electoral votes are needed to win the White House.
The last non-major party candidate to make any headway in the Electoral College was George Wallace, who ran in 1968 on a specific issue -- opposition to civil rights -- and with a very clear regional base of support. At the moment, Americans Elect appears to have neither.
Having a charismatic nominee might help, but would hardly guarantee electoral viability. When one of the most beloved politicians in U.S. history -- Theodore Roosevelt -- bucked the two-party system in 1912, he only succeeded in splitting the Republican vote and ensuring a victory for Woodrow Wilson, a Democrat.
Veteran political analysts Norm Ornstein and Thomas Mann have speculated that an Americans Elect ticket may end up splitting the electorate next year in such a way that an otherwise unacceptable major party nominee ends up capturing the presidency.
"The nightmare scenario for us would be angry or demoralized independents and discouraged centrist Republicans gravitating toward the third candidate, enabling a far-right Republican nominee to prevail with a narrow electoral majority or with a plurality followed by a win in a deeply divided House," they recently wrote in The Washington Post.
The U.S. Constitution requires the House of Representatives to pick the president if no candidate wins a majority of electoral votes.
Ornstein and Mann also question the ability of an independent president to govern effectively, and fear the eventual winner's legitimacy could be undermined by a severe three-way split in the popular vote.
"In this tough environment, any diminishment of legitimacy for the winner is undesirable," they said.
Asked to respond, Wachtel told CNN the need for change is paramount.
"At this point, the system's already spoiled," she said. "We need to open the process up to more competition and more choices for the American people."

Friday, December 23, 2011

Argentine government wins control of newsprint

BUENOS AIRES, Argentina (AP) -- The paper used to produce newspapers came under government control in Argentina on Thursday, in a long-sought victory for President Cristina Fernandez in her dispute with the country's opposition media.
Argentina's senate, which is controlled by Fernandez's allies, voted 41-26 to put the manufacture, sale and distribution of newsprint under state regulation, covering media friends and foes alike.

Newsprint has been a key issue in the never-ending battles between the government and opposition newspapers. Since the early days of the 1976-1983 dictatorship, Argentina's only newsprint provider has been Papel Prensa, a joint venture majority-owned by its dominant newspapers, Clarin and La Nacion. The government has been a minority shareholder.
Vice President Amado Boudou said the law "will improve the quality of information and the plurality of opinions in Argentina."
Ruling party Sen. Anibal Fernandez said Clarin and La Nacion use 71 percent of the newsprint for their own media groups and maintain a competitive advantage by leaving the other 29 percent to 168 other media organizations, which must either pay a 15 percent markup or import what they need.
"We are defending the freedom of expression of all Argentines," Sen. Fernandez said.
But opponents - including media groups from all over Latin America - call it a death blow for freedom of expression.
"We're convinced that state intervention in this area will cause more trouble than the problems it purports to correct," said ADEPA, Argentina's main newspaper trade group.
The Inter-American Press Association put it more bluntly: "This attitude of the government is strange and senseless, because in Argentina there aren't any shortages. The newspapers can freely import paper," its statement said. "That's why we assume that we're facing a government effort to control the media."
It's only the latest move that has put Argentina's dominant media groups on the defensive.
This week, federal police raided Grupo Clarin's Cablevision headquarters at the behest of one judge who demanded that the company be dismantled, while another froze the assets of La Nacion at the request of the federal tax agency. Meanwhile, in criminal court, investigative judges are considering charging the papers' publishers with crimes against humanity, accusing them of conspiring with the military junta to wrest Papel Prensa from a leftist banker's family in 1976.

Thursday, December 22, 2011

Kim Jong-il death: 'Nature mourns' N Korea leader

Godzilla sadly slumped in the East China Sea with his rubber suit at half-zip as Mothra hovered nearby to fan and comfort him with the great winds from its huge papier-mâché wings.

The greenscreened footage will be superimposed into the background of the videos of sobbing North Koreans broadcast by the state, shown in the article below.

Key blurbs from the article:

Scenes of mourning for Kim Jong-il have been broadcast on state television.

Strange natural phenomena have been witnessed in North Korea since the death of the country's leader Kim Jong-il, the state news agency KCNA reports.
Ice cracked on a famous lake "so loud, it seemed to shake the Heavens and the Earth", and a mysterious glow was seen on a revered mountain top, KCNA said.
The personality cult surrounding North Korea's founding father and son bestows near-divine status on them.

...In North Korea, state media continue to report mass grieving following Mr Kim's death, reportedly from a heart attack.

'Message in rock'
The 69-year-old had led North Korea since the death of his father in 1994 and an elaborate personality cult, involving multiple stories of alleged miracles or astonishing deeds, has been built up around him.
Even nature is mourning, the state-run Korean Central News Agency reported on Thursday.
A snowstorm hit as Mr Kim died and ice on the volcanic Chon lake near his reported birthplace at Mount Paektu cracked, it said.
Following the storm's sudden end at dawn on Tuesday, a message carved in rock - "Mount Paektu, holy mountain of revolution. Kim Jong-il" - glowed brightly, it said. It remained there until sunset.
On the same day, a Manchurian crane also apparently adopted a posture of grief at a statue of the late leader's father in the northern city of Hamhung.
"Even the crane seemed to mourn the demise of Kim Jong-il, born of Heaven, after flying down there at dead of cold night, unable to forget him," KCNA reported officials as saying.
On Wednesday state media said more than five million people had already turned out to pay their respects to Kim Jong-il.
State media have called on North Koreans to unite behind his designated heir, youngest son Kim Jong-un, who is being called the "Great Successor".

New Particle at the Large Hadron Collider Discovered by ATLAS Experiment

University of Birmingham

12/22/2011 | Press release
Researchers from the University of Birmingham and Lancaster University, analysing data taken by the ATLAS experiment, have been at the centre of what is believed to be the first clear observation of a new particle at the Large Hadron Collider. The research is published today (22 December 2011) on the online repository arXiv.

The particle, the Chi-b(3P), is a new way of combining a beauty quark and its antiquark so that they bind together. Like the more famous Higgs particle, the Chi-b(3P) is a boson. However, whereas the Higgs is not made up of smaller particles, the Chi-b(3P) combines two very heavy objects via the same 'strong force' which holds the atomic nucleus together.

Andy Chisholm, the PhD student from the University of Birmingham who worked on the analysis, said: 'Analysing the billions of particle collisions at the LHC is fascinating. There are potentially all kinds of interesting things buried in the data, and we were lucky to look in the right place at the right time.'

'The Chi-b(3P) is a particle that was predicted by many theorists, but was not observed at previous experiments, such as in my previous work on the D-Zero experiment in Chicago,' continued Dr James Walder, the Lancaster research associate who worked on the analysis.

Dr Miriam Watson, a research fellow working in the Birmingham group, observed: 'The lighter partners of the Chi-b(3P) were observed around 25 years ago. Our new measurements are a great way to test theoretical calculations of the forces that act on fundamental particles, and will move us a step closer to understanding how the universe is held together.'

Professor Roger Jones, Head of the Lancaster ATLAS group, said: 'While people are rightly interested in the Higgs boson, which we believe gives particles their mass and may have started to reveal itself, a lot of the mass of everyday objects comes from the strong interaction we are investigating using the Chi-b.'


Notes to Editors
1. Chi-b(3P) is pronounced kye-bee three P
2. The beauty quark is also known as the bottom quark.

For further information
Kate Chapple, Press Officer, University of Birmingham, tel 0121 414 2772 or 07789 92116

Tuesday, December 20, 2011

IBM's Five Predictions for the Next Five Years

Biometric passwords? Mind-reading headsets? Junk mail you'll look forward to? As with all predictions, the answer is "maybe"

In each of the past five years, IBM has come up with a list of five innovations it believes will become popular within five years. In this, the sixth year, IBM has come up with the following technologies it thinks will gain traction. Hold on to your sci-fi novels, because some of these are pretty far out there. And some of them, well, I wish we had them today.
People power will come to life. Advances in technology will allow us to trap the kinetic energy generated (and wasted) from walking, jogging, bicycling, and even from water flowing through pipes. A bicycle charging your iPhone? There’s nothing wrong with that, though I think it might be a while before we see this actually become a mainstream practice.
You will never need a password again. Biometrics will finally replace the password and thus redefine the word “hack.” Jokes aside, IBM believes multifactor biometrics will become pervasive. “Biometric data—facial definitions, retinal scans, and voice files—will be composited through software to build your DNA-unique online password.” Based on the increasing hours we spend online, I would say we need such solutions to come to market ASAP.
Mind reading is no longer science fiction. Scientists are working on headsets with sensors that can read brain activity and recognize facial expressions, excitement, and more without needing any physical inputs from the wearer. “Within [five] years, we will begin to see early applications of this technology in the gaming and entertainment industry,” IBM notes. It will also be good for folks who have suffered from strokes and have brain disorders. Personally, I’m not sure this is commercially viable within the stated five years.
The digital divide will cease to exist. Mobile phones will make it easy for even the poorest of poor to get connected. In the U.S. and other parts of the world, this is already happening.
Junk mail will become priority mail. “In five years, unsolicited advertisements may feel so personalized and relevant it may seem that spam is dead. At the same time, spam filters will be so precise you’ll never be bothered by unwanted sales pitches again,” notes IBM. I have just one thing to say about this prediction: OMG.


New predictions aside, IBM’s track record of predictions over the past five years has been somewhat mixed. Let’s take a step back to 2006 and look at its predictions:
• We will be able to access health care remotely, from just about anywhere in the world.
• Real-time speech translation—once a vision only in science fiction—will become the norm.
• There will be a 3D Internet.
• Technologies the size of a few atoms will address areas of environmental importance.
• Our mobile phones will start to read our minds.
Remote health care is a reality, but real-time speech translation is, well, not quite as real. The 3D Internet: we're still waiting for that, but those mobile phones are becoming awfully smart. As I said, IBM's record is mixed. In 2007, IBM correctly predicted driving would be assisted by software and that your phones would become "your wallet, ticket broker, concierge, bank, shopping buddy, and more." But that was right after the iPhone was launched.
As another example, IBM in 2009 predicted city buildings would “sense and respond” like living organisms. That sensor-based future is finally unfolding two years later. That same year, it predicted cars and buses would run on hydrogen and biofuels. Well, that’s half-true. We have some places where some buses and some cars are running on biofuels. Its prediction that cities will develop a healthier immune system due to connectedness, however, is quite far from reality—although we still have a little more than two years to go before we can say IBM got those wrong.
Bottom line: IBM’s Five in Five makes a nice cheat sheet to keep an eye on the future and also focus on key trends that might go big. I can’t wait for the 2012 edition.

Saturday, December 17, 2011

Congressman on hunger strike to show solidarity with Occupy

By Aubrey Whelan

Rep. Keith Ellison, D-Minn., embarked on a 24-hour hunger strike in solidarity with four Occupy DC protesters who have gone without food since Dec. 8 to advocate for D.C. voting rights.

Ellison, the first Muslim to serve in the House, met with the hunger strikers Thursday and pledged to read their declaration – which calls for full voting rights for District residents as well as legislative and budget autonomy – on the floor of the House of Representatives to enter it into the congressional record.

The hunger strikers have been meeting with various lawmakers and their staffers on Capitol Hill this week. Success has been mixed. On Wednesday, strikers staged a sit-in outside House Speaker John Boehner's office for hours, but didn't get a meeting with the Ohio Republican.

Wednesday, December 14, 2011

Chinese Government Ramps Up Weather Control Efforts

BEIJING - China will begin four regional programs to artificially increase precipitation across the country before 2015, according to the newly released 12th Five-Year Plan (2011-2015) for meteorological development.

Together with the existing program in Jilin province, to influence weather in northeastern parts of China, the five regional weather control programs will increase artificial precipitation volume by 10 percent, according to the plan.

Each year, an average of 3 trillion cubic meters of water passes over China in clouds, and only 20 percent of it falls to the ground, according to the China Meteorological Administration (CMA).

Currently, 50 billion cubic meters of rain and snow are gained annually in artificial precipitation, but the volume could reach 280 billion cubic meters if more effective weather intervention measures are taken, according to the CMA.
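The volumes quoted by the CMA are easy to sanity-check. A minimal back-of-the-envelope sketch in Python (the percentage comparisons are my own arithmetic, not figures from the plan):

```python
# CMA figures quoted above, all in cubic meters per year.
overhead = 3e12          # moisture passing over China in clouds
natural_fraction = 0.20  # share that currently falls to the ground
current_artificial = 50e9
claimed_potential = 280e9

natural = overhead * natural_fraction
print(natural)                              # 600 billion m^3 falls naturally
print(100 * current_artificial / overhead)  # artificial rain is ~1.7% of overhead moisture
print(100 * claimed_potential / overhead)   # the claimed ceiling is ~9.3%
```

On these numbers, even the claimed ceiling would tap less than a tenth of the moisture passing overhead.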

"Because clouds are boundless, weather control is boundless. The five regional weather control programs will coordinate the ground resources, such as the cloud seeding rockets and planes, across provinces to increase potential rain or snow," said Zheng Jiangping, deputy director of the CMA's department of emergency response, disaster mitigation and public services emergency management.

Zheng said the programs can play an important role in guaranteeing the nation's plan to boost the annual grain yield to 550 million tons by 2020 - that target was exceeded this year with a record 571 million tons.

As extreme weather events such as drought and flooding become more common, protecting the nation's main wheat-producing areas grows in urgency - which is why the first regional program covers the northeastern parts of the country, including Liaoning, Jilin and Heilongjiang provinces in Northeast China and the eastern part of the Inner Mongolia autonomous region in North China.

"The program in Jilin was finished late this year and is working well. The successful operation will accelerate the construction of the other four," Zheng said.

They will cover the northwestern, southern, southwestern and northern parts of China, but a detailed plan has not yet been released, he said.

A national weather intervention command center will also be established before 2015, according to the plan.

Zheng said the national center will focus on scientific research and development of weather control techniques, providing technological support to the regional weather stations and coordinating the country's cross-region weather intervention.

Agricultural experts welcomed the plan, but also cited a need for improving the nation's irrigation system.

Lu Bu, a researcher in agriculture resources and regional planning at the Chinese Academy of Agriculture Sciences, said water shortages are now the biggest obstacle in increasing the country's grain output.

In October last year, when most central and eastern parts of China were experiencing a drought, authorities stepped up efforts to produce artificial precipitation. After intense efforts, precipitation in February in seven provinces and municipalities reached 2.2 billion tons, 17 percent of it triggered by weather intervention, according to the administration.

China Daily

Tuesday, December 13, 2011

Speed of Light Lingers in Face of New Camera


More than 70 years ago, the M.I.T. electrical engineer Harold (Doc) Edgerton began using strobe lights to create remarkable photographs: a bullet stopped in flight as it pierced an apple, the coronet created by the splash of a drop of milk.
Di Wu and Andreas Velten, MIT Media Lab

SLOW DOWN M.I.T.'s camera captures light particles seemingly in motion by using repeated exposures, creating a “movie” of a nanosecond-long event.

Now scientists at M.I.T.’s Media Lab are using an ultrafast imaging system to capture light itself as it passes through liquids and objects, in effect snapping a picture in less than two-trillionths of a second.

The project began as a whimsical effort to literally see around corners — by capturing reflected light and then computing the paths of the returning light, thereby building images coming from rooms that would otherwise not be directly visible.

“When I said I wanted to build a camera that looks around corners, my colleagues said, ‘Pick something that is more safe for your tenure,’ ” said Ramesh Raskar, an associate professor of media arts and sciences at the Media Lab. “Now I have tenure, so I can say this is not so crazy.”

Dr. Raskar enlisted colleagues from the chemistry department to modify a "streak tube," a supersensitive piece of laboratory equipment that scans and captures light. Streak tubes are generally used to convert streams of photons into streams of electrons. They are fast enough to record the progress of packets of laser light fired repeatedly into a bottle filled with a cloudy fluid.

The instrument is normally used to measure laboratory phenomena that take place in an ultra-short timeframe. Typically, it offers researchers information on intensity, position and wavelength in the form of data, not an image.

By modifying the equipment, the researchers were able to create slow-motion movies, showing what appears to be a bullet of light that moves from one end of the bottle to the other. The pulses of laser light enter through the bottom and travel to the cap, generating a conical shock wave that bounces off the sides of the bottle as the bullet passes.

The streak tube scans and captures light in much the same way a cathode ray tube emits and paints an image on the inside of a computer monitor. Each horizontal line is exposed for just 1.71 picoseconds, or trillionths of a second, Dr. Raskar said — enough time for the laser beam to travel less than half a millimeter through the fluid inside the bottle.

To create a movie of the event, the researchers record about 500 frames in just under a nanosecond, or a billionth of a second. Because each individual movie has a very narrow field of view, they repeat the process a number of times, scanning it vertically to build a complete scene that shows the beam moving from one end of the bottle, bouncing off the cap and then scattering back through the fluid. If a bullet were tracked in the same fashion moving through the same fluid, the resulting movie would last three years.
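The per-line timing above can be checked directly: in 1.71 picoseconds, light covers about half a millimeter. A quick sketch (the refractive index of the cloudy fluid is my assumption; the article says only "less than half a millimeter"):

```python
c = 299_792_458.0     # speed of light in vacuum, m/s
exposure = 1.71e-12   # per-line exposure, seconds (from the article)

vac_mm = c * exposure * 1000   # distance light covers per exposure, in mm
print(vac_mm)                  # ~0.513 mm in vacuum

n = 1.3                        # assumed refractive index of the milky fluid
print(vac_mm / n)              # ~0.39 mm, i.e. "less than half a millimeter"
```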

“You can think of it as slow motion,” Andreas Velten, a postdoctoral researcher who is a member of the design team, said during a recent technical presentation. “It is so much slow motion you can see the light itself move. This is the speed of light: there’s nothing in the universe that moves faster.”

Dr. Raskar says the technology has a variety of promising commercial applications. Last year, for example, one of his graduate students, Jaewon Kim, published a thesis envisioning portable CAT-scanning devices.

Dr. Raskar said he could also envision smartphone software that would capture and interpret reflections from, say, fruit. “Imagine if you have this in your phone about 10 years from now,” he said. “You will be able to go to your supermarket and tell if your fruit is ripe.”

Until now, picosecond speeds have largely been the province of an elite group of scientists clustered at the nation’s weapons laboratories.

At Lawrence Livermore National Laboratory, Gary Jones is an optical physicist who builds ultrafast imaging systems that help characterize the first microseconds of events like laser fusion and nuclear explosions. “To get a two-dimensional image within a picosecond means you have to have a lot of electronics moving really fast,” he said.

For Dr. Raskar — who optimistically calls the project “femto photography,” using the term for quadrillionths of a second — it is about more than just engineering or science. “We were inspired by looking at the world in a unique way just because we could,” he said.

The system allows the naked eye to see information that has until now been rendered as data and charts. The proper analogy is to the way astronomers use instruments like radiotelescopes to create images with “fake” colors to see things in new ways — or to the original inspiration of Eadweard Muybridge, the 19th-century British photographer who achieved a new understanding of a horse’s gait by creating a camera array with electromagnetic shutters set off by tripwires.

“We’re still trying to get our heads around what this means,” Dr. Raskar said, “because no one has been able to see the world in this way before.”

Monday, December 12, 2011

Researchers Teach Subliminally; Matrix Learning not Far Away

For the first time ever, scientists from Boston University and ATR Computational Neuroscience Laboratories in Kyoto, Japan have managed to use functional Magnetic Resonance Imaging, or fMRI, to decode the process of learning.

Here’s the basic procedure:

* Find someone capable of performing a task - say, juggling. Put them into an fMRI machine and have them imagine juggling. As they mentally go through how they do it, the scientists decode the brain patterns into something they can use later.
* Find another person. Put them in the fMRI machine and have them try to imagine juggling. Decode as before, then compare.
* Use neurofeedback: reward people for increasing the similarity between the two brain patterns.
* Nothing else. By mimicking the brain state of the expert juggler, you are learning how to juggle.
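At its core, the neurofeedback step in the procedure above rewards a subject for making their decoded brain pattern resemble a target pattern. A toy sketch of that loop (the cosine similarity metric and the reward scaling are illustrative assumptions; the real decoders work on fMRI voxel data and are far more elaborate):

```python
import math
import random

def cosine_similarity(a, b):
    """Similarity between two activation patterns, treated as flat vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def neurofeedback_reward(target_pattern, current_pattern, scale=100.0):
    """Reward grows as the subject's decoded pattern approaches the target."""
    return scale * max(0.0, cosine_similarity(target_pattern, current_pattern))

# Toy run: a fixed "expert" pattern and a noisy attempt at reproducing it.
random.seed(0)
target = [random.gauss(0, 1) for _ in range(50)]
attempt = [t + random.gauss(0, 0.5) for t in target]
print(neurofeedback_reward(target, attempt))
```

The subject never needs to know what the target is; they simply learn, by trial and error, which mental states push the reward up.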

Not Here Yet, but Soon

As the research stands to date, it isn’t capable of much. Rather than working with skills like juggling, the researchers relied on images so they could tie into the vision part of the brain, the part that they have managed to partially decode.

Nevertheless, they demonstrated that information could be taught using neurofeedback techniques. And it was effective even when people didn’t know they were learning.

The researchers are cautiously optimistic about the ability to automatically teach information. As Mitsuo Kawato of ATR Computational Neuroscience Laboratories put it: "In theory, hypnosis or a type of automated learning is a potential outcome. However, in this study we confirmed the validity of our method only in visual perceptual learning. So we have to test if the method works in other types of learning in the future. At the same time, we have to be careful so that this method is not used in an unethical way."

Sunday, December 11, 2011

The condescending UI

(see graphics on the original post)

By Paul Miller
I have a kneejerk reaction to most modern computer user interfaces (also, all microwave user interfaces). I've used plenty of excuses over the years: my "eye for design," my love of minimalism, a sense of utility. Today, I finally put my finger on it, and it's not just a desire for the-computer-as-pure-machine, or a spartan aesthetic. It's quite simple, really: I don't like the condescending tone.

Growing up I was always very small for my age. I didn't mind the size ("the bigger they are, the harder they fall!" was my rallying cry), but I hated being thought of as younger than I was, be it in physical or intellectual capabilities. When you're a knowledge sponge as a kid, the first time someone tells you something, it feels amazing, and you love that person. The second time someone tells you the same fact, it's pure torture. "I know this already, I'm not an idiot! Sheesh."

My problem with many modern UIs is that they never get past the telling phase. They're always dressing up their various functions with glows and bevels and curves, and in the process they somehow become overbearing to my senses. "Did you know you can click this? Don't forget there's a save button over here! Let me walk you to your control panel." Imagine a car that verbally explains all of its various knobs and levers the first time you get into the car. Wonderful, right? Now imagine that car explaining all of these various functions every single time you get in the car for the next five years, until you finally snap and drive it off a cliff.


An example of this is the dramatic, quasi-utilitarian animated transition. The first few times, it's conveying important information: click that button? That launches this action! Swoosh. The next 10,000 times, it's mainly just slowing me down. A "wizard" is supposed to ask a pertinent question that eventually leads me to a specific control panel, but once I know the actual control panel I want, the friendly "wizard" is more like a guard at the gates. The Ribbon in Microsoft Office products (which is making its way to the file manager in Windows 8) is constantly talking down to me, assuming I don't know how to use a menu, a key command, or an honest-to-goodness toolbar.

But it's not just functionality; something deeper bugs me about the decorations themselves. Like the ubiquitous drop shadow. "Did you know that this window is on top of this window?" it whispers to me, endlessly. Apple's love of reflections and faux 3D subtly implies that I might be lost, needing landmarks and a sense of place to find my way. Microsoft's oddly-sized minimize / maximize / close buttons in Windows 7 are only there to help, but they also hint at some lack of eye-hand coordination on my part. Soft edges, endless gradients, and rounded corners seem designed to keep me from hurting myself on an acute angle, as if the desktop is a choke-proof toy for babies, instead of a sharpened pencil. A huge graphical icon representing an app might look incredible and enticing, but after a while it's sort of oversharing. It's constantly reintroducing itself, in case I didn't catch its name the first time.

And of course, there is the transgression of the century: Apple's downward spiral into overt 1:1 metaphors. The physical bookshelf, the leather desk calendar (complete with a torn page), the false-paginated address book, whatever Game Center is trying to pull off. At least Cover Flow's LP-flipping simulation was cute and emergent, if quickly tiresome, but these new tricks are horrible and offensive. They're not only condescending and overwrought, they're actually counter-functional. For some true analysis of Apple's recent sins in this department, check out John Siracusa's in-depth look over at Ars Technica.

Of course, seeking out an aesthetically pleasing interface is a worthy goal. The muted chrome of Lion is really (relatively) wonderful, and I like the vibrant color palette of Windows 7. These are truly great efforts, by great designers, but I still can't help but feel like they're taking the operating systems I knew, in all their "ugly" 1990s glory, and dressing them up in Little Lord Fauntleroy suits. Perhaps without the limitations of a finite number of colors and pixels to force simplicity, UI designers just don't know what to do with themselves. I'd argue they do too much. Luckily, OS X has improved somewhat in recent years, reining in the horrible early 2000s excesses of Aqua (minus the aforementioned iCal and Address Book), but Microsoft's traditional, windowed desktop experience seems to be trending in the opposite direction.

Ultimately, an OS is much more than its chrome: it's about what it can do, and how efficiently it can do it. I know plenty of people who, like me, pine for BeOS. Many of us never even used BeOS, or at least not seriously. And yet we check up on the Haiku blog (Haiku is an open source project to rebuild BeOS from the ground up) every month or so, hoping against hope that BeOS will return to save us from overwrought UI, and the overwrought kernels underneath. BeOS is purely digital, with a sort of 8-bit charm, complete with pixel-perfect isometric icons. It's much like the appeal of "retro" indie games, which deal in our native, shared gaming language and metaphors, not something borrowed from action movies or an overblown sense of virtual reality. But it's all a fantasy, really: I'll probably never be able to do even a fraction of my daily work on Haiku. Just like how my SNES will never run Skyrim.

Maybe Microsoft's excellent "Metro" design language, which is on Windows Phone, and is headed to the Xbox and the Windows 8 Start screen, is the answer. Microsoft itself describes Metro as "authentically digital," which seems like a great place to start. Still, even Metro seems to be a little over-designed at times, with its oversized fonts bleeding off the screen, low information density, and an abundance of swoops and swipes. It's saying "look how minimal I am," but in the loudest way possible. I'm still waiting for my UI knight in shining armor.

In my personal quest to escape the condescension, I recently switched my Windows 7 install over to the "Classic Theme," which is basically Windows 95 incarnate, just with all the under-the-hood improvements I've come to rely on. I really like it. It feels right, and if it isn't beautiful, at least it's honest. I wish there were a similar OS 9 mode for OS X. Sometimes I like to dink around in Terminal, accomplishing nothing, but at least knowing that I'm engaging the computer on my own terms, with no buffer. Maybe someday someone can port over the old NeXT UI to OS X, or perhaps Mac OS X Server 1.0's approximation of OS 9's chrome. The retro affectations are silly and quixotic, but at least now I know why I put myself through these contortions. I'm a grown-up! Sheesh.

Friday, December 9, 2011

More evidence found for quantum physics in photosynthesis
By Brandon Keim

Physicists have found the strongest evidence yet of quantum effects fueling photosynthesis.

Multiple experiments in recent years have suggested as much, but it's been hard to be sure. Quantum effects were clearly present in the light-harvesting antenna proteins of plant cells, but their precise role in processing incoming photons remained unclear.

In an experiment published Dec. 6 in Proceedings of the National Academy of Sciences, the researchers establish a connection between coherence—far-flung molecules interacting as one, separated by space but not time—and energy flow.

"There was a smoking gun before," said study co-author Greg Engel of the University of Chicago. "Here we can watch the relationship between coherence and energy transfer. This is the first paper showing that coherence affects the probability of transport. It really does change the chemical dynamics."

The new findings are the latest in a series that have, piece by piece, promised to expand scientific understanding of photosynthesis, one of life's fundamental processes. Until a few years ago, it seemed a straightforward piece of chemistry.

Then came observations of coherence in antenna-protein chlorophylls from green sulfur bacteria. They lasted far longer than anyone expected, long enough to hint at a functional role. Those observations were, however, made at unrealistically ultracold temperatures; later they were repeated at room temperature, and in antenna proteins found in plants everywhere.

Confronted with this unexpected coherence, researchers hypothesized a role in enabling ultra-efficient energy transfer. Energy from incoming photons could simultaneously explore every possible chlorophyll route from a protein's surface to the reaction center at its core, then settle on the shortest path.
An antenna protein complex. The protein scaffolding is grey and chlorophyll molecules are green. Credit: Greg Engel

To see if that happened, a team led by Engel and Shaul Mukamel of the University of California, Irvine analyzed fluctuations in laser light as it passed through antenna proteins. From the way the light shifted, the researchers could track what happened inside.

They found a clear mathematical link between energy flows and fluctuations in chlorophyll coherence. The link was so clear it could be described with derivatives of sines and cosines, mathematical concepts taught in college trigonometry.

"The mounting evidence that quantum effects can be seen in natural systems when excited by lasers is compelling," said Greg Scholes, a University of Toronto biophysicist who first found quantum effects in room temperature photosynthesis.

Further research is needed to understand the full role of quantum physics, said Scholes. "How much do they change our understanding? How much are they needed?" he said.

Engel sees a lesson in the importance of the antenna proteins in which chlorophyll molecules are embedded. "The protein does a lot more for this system than we thought," he said. "It's not just a simple structural element."

Molecular biologists "are trained to look at the molecule," Engel said. "We don't usually design systems. We design molecules. The question becomes: Which aspects of this do we strive to recreate? We are very interested in the design principles. How could you design one of these?"

17-year-old wins $100k for creating cancer-killing nanoparticle

At the age of 17 I was paying attention in college, but still enjoying the student life as much as studying towards my career goals. What I wasn’t doing was working at the cutting edge of cancer treatment and developing a potential cure.

Angela Zhang is, and she’s just been awarded the $100,000 Grand Prize in the Individual category of the Siemens Competition in Math, Science & Technology. Her project was entitled “Design of Image-guided, Photo-thermal Controlled Drug Releasing Multifunctional Nanosystem for the Treatment of Cancer Stem Cells.”

Her creation is being heralded as a “Swiss army knife of cancer treatment.” Zhang managed to develop a nanoparticle that delivers the drug salinomycin to the site of a tumor. Once there, it kills the cancer stem cells. However, Zhang went further and included both gold and iron-oxide components, which allow for non-invasive imaging of the site through MRI and photoacoustic imaging.

As to why she chose this as her project, Zhang explains that she was surprised when looking at the survival rates of patients receiving cancer treatment. As cancer stem cells are resistant to many forms of cancer treatment, it seemed like an area worth focusing on. Her nanoparticle won the award because it has the potential to overcome that resistance while also allowing the effects of the treatment to be monitored in real time using existing imaging techniques.

Zhang’s achievement is impressive not only because she is just 17 years old, but also because of the level of understanding required to create such a nanoparticle in the first place. She has spent over 1,000 hours since 2009 researching and developing the particle, and wants to go on to study chemical engineering, biomedical engineering, or physics. Her dream job is to be a research professor.

The Siemens Competition is in its 13th year and aims to highlight talent at the high school level for those interested in science research. Last year 15-year-old Benjamin Clark won the Individual category for his work on how stars are born. In 2009 Ruoyi Jiang won for his research into chemotherapy drug resistance.

I think we can all agree this is a very worthwhile competition, and long may it continue if it pushes young minds to create solutions to some of our biggest problems.

Read more at the Siemens Foundation and The George Washington University

Saturday, December 3, 2011

How a Computer Game is Reinventing the Science of Expertise

By Sandra Upson

If there is one general rule about the limitations of the human mind, it is that we are terrible at multitasking. The old phrase “united we stand, divided we fall” applies as well to the mechanisms of attention as it does to a patriotic cause. When devoted to a single task, the brain excels; when several goals splinter its focus, errors become unavoidable.

But clear exceptions challenge that general rule. Two weeks ago, thousands of computer game enthusiasts descended on a convention center in downtown Providence, Rhode Island, to observe some of these exceptions in action. They were attending the championships of one of the world’s hottest computer games, StarCraft 2. Hands fluttered over keyboards like hummingbirds mid-hover at about fifty computers set up in a dimly lit open hall. Players, many of whom flew in from South Korea to compete, vied to advance through their brackets to the finals. This game is no joke, with the prize money to prove it—$50,000 went to the winner, a 16-year-old Korean who goes by the name Leenock. The agility on display in Providence —as seen in the players’ multitasking, their nonstop decision-making, and the stunning speed of their fingers—has not gone unnoticed by cognitive scientists.

For decades, a different game, chess, has held the exalted position of “the drosophila of cognitive science”—the model organism that scientists could poke and prod to learn what makes experts better than the rest of us. StarCraft 2, however, might be emerging as the rhesus macaque: its added complexity may confound researchers initially, but the answers could ultimately be more telling.

This real-time strategy game demands the frenetic pursuit of numerous simultaneous goals, any of which can change in the blink of an eye. Players assume a god-like role over a cluster of creatures, leading them to develop their economy and preparing them for skirmishes with a neighboring society. Wildly popular among gamers—StarCraft 2 was the top-selling computer game in 2010, the year it was released—the game appeals to researchers for the data each match generates. When two players face off, their computers each produce a record of the actions taken during the game. Called replay files, those logs reflect what a gamer was thinking at every stage of play. “I can’t think of a cognitive process that’s not involved in StarCraft,” says Mark Blair, a cognitive scientist at Simon Fraser University. “It’s working memory. It’s decision making. It involves very precise motor skills. Everything is important and everything needs to work together.”

That intellectual rigor and the corresponding data trail, multiplied across hundreds of thousands of players worldwide, make StarCraft an unparalleled resource that scientists are only now tapping for the study of attention, multitasking, and learning. (As a rough estimate, at the time of writing about 11,000 games are being played on the servers of Blizzard Entertainment, the company that created StarCraft.) Recent experiments on computer games are beginning to suggest that players develop skills that could be useful in other contexts—skills that might allow those individuals to cope better with certain types of information overload. Thousands of these gamers are now contributing to a project under Blair’s watch, called SkillCraft, to learn what separates experts from novices and everyone in between. By all appearances this study of StarCraft players is the world’s biggest experiment on how expertise develops and, ultimately, on how we learn.

Video game research has reached a turning point: psychologists are no longer asking only whether the violence in some games corrupts young minds, or whether games are dangerously addictive, thus corrupting young minds. At least in some cases, gamers are being recognized for the specific forms of learning they cultivate, with the data trail that could finally unravel some of the major mysteries of the human brain.

Why StarCraft

To really appreciate why scientists are turning to StarCraft 2, you need to know a few basics of the game. Two players, connected over the internet, fight for control of a territory of which they have an aerial view. Each player begins with a small base of one of three species—terran (humans), zerg (insectoid creatures), or protoss (photosynthetic aliens). To win, one species’ army must defeat the other, a simple enough goal. But the number of variables that can shift during the game is enormous, demanding constant slight adjustments to strategy. Unlike in a board game, StarCraft players don’t take turns—they simply do as much as they possibly can and hope their opponent is not as fast.

The species’ first task is to start extracting minerals and a fictitious vespene gas, which form the foundation of a StarCraft economy. Each player must balance building up his or her economic production with developing fighters, defenses, and eventually more bases. A player must also try to discern the economic and military strategy of the opponent, whose base is initially hidden from view. As mineral and gas production ramps up, the gamer gains capital to spend on developing more advanced technology, a larger economy, or more fighters. StarCraft 2’s overarching strategic challenge is to decide how much time and money to devote to building up either an economy or an army.

But that’s just one level of play. When you attack, you do not simply dispatch fighters—you manually control them by using your mouse to click on them and set them in motion. A skilled player will monitor the health of individual fighters in the frontlines of a battle and pull them back to the rear as they lose strength, giving them time to recover while fresher troops bear the brunt of the attack. At any given time, you can have several dozen fighters with different abilities pottering around a map, numerous laborers busily mining minerals and gas, and various facilities producing new tools and resources that should be deployed immediately. With so many moving parts, even a top-level player can succumb to paralysis.

A screenshot from a StarCraft 2 game.

Translating those goals into cognitive load, the brain’s executive functions manage most of the game’s demands. Several types of memory may be engaged to keep track of the weapons at one’s disposal and the locations of multiple objects on a map; attentional systems allow a player to plan future moves, switch focus to different activities around the map, and evaluate the enemy’s strategy. Motor skills are needed to rapidly click around the map to move and implement actions.

In short, the game is a relentless exercise in multitasking and constant decision-making. The winner, often, is the person who can make the most moves—an elite player can perform about 5 or 6 actions a second, which translates into a flurry of key presses and mouse maneuvers.

Commentators react to events in a game. Credit: Major League Gaming

But you don’t have to be elite to be intriguing from a psychological perspective. According to the Entertainment Software Association, 72 percent of American households play computer or video games. With people engaging with games at all ages, scientists have become increasingly curious about how visually arresting—and in the case of StarCraft and similar games, cognitively demanding—software might be interacting with the brain. “From the perspective of the cognitive motor system, StarCraft is the most interesting thing you could do online,” Blair says.

The Trouble with Brain Training

The question now tantalizing psychologists is whether the rest of us can learn anything from these hyper-specialized multitasking gamers. Perhaps we, too, can accomplish some 300 things each minute—such productivity! Maybe we can learn to pick up new skills faster or use such games to stave off aging.

Given appropriate practice, humans seem to improve on almost any task they tackle. Ask us to sculpt a cake in the shape of Pinocchio, and with sufficient time and motivation we probably will. But multitasking abilities tend to resist practice. Perhaps StarCraft, somehow, possesses the elixir that can morph us into successful multitaskers.

Part of the problem is that once developed, human skills generally stay specific to the original task. Expert chess players, for example, have significantly better recall than non-experts of the positions of chess pieces on a board after a brief exposure. That is as you might expect: years and years of practice have produced deep familiarity with the arrangements of pawns, rooks, and knights and what strategic opportunities they represent. But chess players turn out to be no better than others when asked to remember the arrangements of chess pieces placed at random, in configurations that could not appear in a game. Experiments in numerous other domains have demonstrated a similar lack of transfer.

In the last decade, however, some experiments have begun to suggest that video games might indeed teach transferable skills. Cognitive scientist Daphne Bavelier at the University of Rochester and her colleagues have used video games to investigate what kinds of learning humans are good at, and along the way they’ve turned up some promising, if modest, examples of brain training. (When scientists search for newly acquired abilities that transfer from one domain to another, they boil them down to skills so basic that you could be forgiven for responding with the raise of a single eyebrow.)

Early results suggest that gamers may have faster visual reaction times, enhanced visuomotor coordination, and heightened ability to visualize spatial arrangements. They may also be better at rotating an object in their minds and may distinguish more deftly between the trajectories of moving objects. Players might also have an edge when paying attention to several objects at once.

Because the tests did not mimic exactly the characteristics of the games participants played, the researchers are optimistic that more general skills were heightened. “There are few obvious links between chasing monsters across a star-spotted ‘spacescape’ and determining the orientation of a single black ‘T’ on a uniform gray background, or between driving a car through a crowded cityscape while shooting at rival vehicles and counting the number of white squares that are quickly flashed against a black background,” Bavelier and colleagues wrote in support of the transfer effects in a 2008 article in Psychology and Aging.

The literature, however, is no Greek chorus of collective assent. Some studies have failed to replicate the benefits of gaming, as a recent review article points out, and most experiments in the field have struggled to prove that they’ve lopped off all of the placebo effect’s nefarious tentacles.

The vast majority of these studies focused on first-person-shooter games, which may share only a few characteristics with real-time strategy games. Nonetheless, small hints from the StarCraft 2 community also suggest that players might be developing some generalizable expertise. Most current top players initially participated in the original StarCraft, an obviously related but nonetheless distinct game, or Warcraft, another variation on the real-time-strategy theme. Of course, a selection bias may be muddying the waters: the elite players who are drawn to these types of games may already have stronger-than-average multitasking skills. Even so, the fact that long-time players entered the new game at a substantial advantage suggests that they had honed some sort of StarCraftian skill.

In a paper published this year, cognitive scientist Joshua Lewis and colleagues at the University of California, San Diego analyzed what actions players took in 2,000 games to see if certain capabilities stood out as hallmarks of success. Unlike previous studies, which tested participants before and after they played games to see if their behaviors changed, the approach taken by Lewis and colleagues allowed them to look for specific differences in what players are doing and perceiving.

They tracked several measures, including how many actions players took per minute and the distances between the locations where actions occurred across the map. Not surprisingly, they found that players who made the most moves tended to win. Of more interest was the second calculation. Distributing actions more widely across a map, which the authors argue reflects a player’s ability to distribute attention, also correlated highly with winning.
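Those two measures are simple to compute once a replay has been parsed. Below is a minimal sketch, assuming each replay has already been reduced to a list of timestamped (x, y) action events; this format and the sample numbers are purely illustrative, since real replay files require a dedicated parser:

```python
import math

# Hypothetical parsed replay: (seconds into game, x, y) for each player action.
# Real StarCraft 2 replays need a dedicated parser; this format is illustrative.
actions = [
    (1.0, 10, 12), (1.4, 11, 12), (2.1, 80, 75),
    (2.5, 82, 74), (3.0, 10, 13), (3.8, 79, 76),
]

def actions_per_minute(actions, game_length_seconds):
    """Raw pace of play: total actions scaled to a per-minute rate."""
    return len(actions) * 60.0 / game_length_seconds

def mean_action_distance(actions):
    """Average map distance between consecutive actions, a rough proxy
    for how widely a player distributes attention across the map."""
    dists = [
        math.dist((x1, y1), (x2, y2))
        for (_, x1, y1), (_, x2, y2) in zip(actions, actions[1:])
    ]
    return sum(dists) / len(dists)

print(actions_per_minute(actions, game_length_seconds=4.0))  # pace of play
print(mean_action_distance(actions))                         # attention spread
```

In the study's terms, a player whose actions cluster in one corner of the map would score low on the second measure, while a player juggling a battle and a distant base would score high.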

Now the question is whether people can learn to divide their attention more effectively. Professional StarCraft players belong to teams, with coaches and practice schedules, and they devote the majority of their time to developing their abilities. “If there is some methodology for building up multitasking skills, we might be able to figure out a way to train people to better distribute their attention,” Lewis says. “Maybe these teams have learned that implicitly.”

A New Model Organism?

In a traditional experiment on expertise, investigators corral about ten highly ranked professionals into a study and compare them with a similar number of novices. With StarCraft 2, scientists can mine the replay files of players at all stages, from chumps up to champions.

Blair, the Simon Fraser University scientist running the SkillCraft project, asked gamers at all ability levels to submit their replay files. He and his colleagues collected more than 4,500 files, of which at least 3,500 turned out to be usable. “What we’ve got is a satellite view of expertise that no one was able to get before,” he says. “We have hundreds of players at the basic levels, then hundreds more at a slightly better level, and so on, in 8 different categories of players.” By comparing the techniques and attributes of low-level players with other gamers up the chain of ability, they can start to discern how skills develop—and perhaps, over the long run, identify the most efficient training regimen.
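That satellite view boils down to a grouped comparison across skill tiers. As a minimal sketch, suppose each submitted replay has been reduced to a (league, actions-per-minute) pair; the league names and numbers here are hypothetical stand-ins, not the project's actual data:

```python
from collections import defaultdict

# Hypothetical per-player summaries: (skill league, mean actions per minute).
# The real SkillCraft dataset has its own schema and many more measures.
players = [
    ("Bronze", 45), ("Bronze", 55), ("Silver", 70), ("Silver", 80),
    ("Gold", 95), ("Gold", 105), ("Master", 180), ("Master", 200),
]

def mean_apm_by_league(players):
    """Group players by league and average their APM, giving a coarse
    view of how pace of play scales with skill tier."""
    buckets = defaultdict(list)
    for league, apm in players:
        buckets[league].append(apm)
    return {league: sum(apms) / len(apms) for league, apms in buckets.items()}

print(mean_apm_by_league(players))
```

Repeating the same grouping for every measure in the replays, not just pace of play, is what lets the comparison trace how individual skills change from tier to tier.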

LosirA, a 19-year-old multitasking phenom, merely thinks for a moment. Credit: Major League Gaming

Both Blair and Lewis see parallels between the game and emergency management systems. In a high-stress crisis situation, the people in charge of coordinating a response may find themselves facing competing demands. Alarms might be alerting them to a fire burning in one part of town, a riot breaking out a few streets over, and the contamination of drinking water elsewhere. The mental task of keeping cool and distributing attention among equally urgent activities might closely resemble the core challenge of StarCraft 2. “For emergencies, you don’t get to train eight hours a day. You get two emergencies in your life but you better be good because lives are at stake,” Blair says. “Training in something like StarCraft could be really useful.”

Indeed. But they know what we really want. Even more useful would be stumbling on the secrets of multitasking hidden in the replay files, then distilling them into helpful hints for the rest of us.

Friday, December 2, 2011

Gene therapy can protect against HIV

An introduced gene conveys long-lived resistance to HIV infection in mice.

by Lauren Gravitz

Gene therapy, an approach most commonly explored for curing chronic genetic diseases such as cystic fibrosis, may also prove practical for disease prevention. In research published today in Nature, scientists in California show that a single injection — which inserted the DNA for an HIV-neutralizing antibody into the muscle cells of live mice — completely protected the animals against HIV transmission.

The road to a vaccine against HIV has proved to be far longer than originally anticipated. More than 2 million adults are newly infected with HIV every year and, nearly three decades after the virus was first identified, researchers haven’t found a reliable way to prevent infection. The classic vaccine approach, which uses all or part of an inactivated virus to induce immunity, has yielded little success because HIV has managed to disguise most of the easily recognized external structures that antibodies would target. Researchers have thus had a tough time finding a molecule that can induce even moderately broad responses against the virus in all its different mutations. So although it might sound extreme to use gene therapy as a preventative treatment for HIV/AIDS, the method could provide a much-needed alternative.

Researchers hope to prevent the spread of HIV (virus particle pictured) by using gene therapy to get cells to produce antibodies.


David Baltimore, a virologist and HIV researcher at the California Institute of Technology in Pasadena, and his colleagues used a genetically altered adenovirus to infect muscle cells and deliver DNA that codes for antibodies isolated from the blood of people infected with HIV. The DNA is incorporated into the muscle cells’ genome and programs the cells to manufacture the antibody, which is then secreted into the bloodstream. The tactic builds on earlier work by scientists at the Children’s Hospital of Philadelphia in Pennsylvania, who in 2009 first described the effectiveness of this technique in preventing transmission of simian immunodeficiency virus, which is similar to HIV but infects monkeys.

As for the rationale for using gene therapy for HIV: “This is something way out of the ordinary, and it’s perfectly reasonable to say that there’s no reason to do it if there’s an alternative,” says Baltimore. “But if there’s no alternative — and that’s where we’re at today — then we should be thinking of new ways to protect people.”

Dennis Burton, an immunologist at the Scripps Research Institute in La Jolla, California, who has developed a number of antibodies against HIV, agrees. “Obviously, the best thing of all is a vaccine. That’s a tried-and-tested method that carries very few risks. But if that doesn’t work, what’s our fall-back position?” he asks. “We have these antibodies, and we have them available now. If this works in humans, and that’s a reasonable supposition, you’d have something you can do now.”

Prolonged protection

Baltimore and his colleagues tested five different broadly neutralizing antibodies, one at a time, in mice with humanized immune systems. Two of the antibodies, called b12 and VRC01, proved completely protective — even when the mice received doses of HIV 100 times higher than a naturally infectious dose. After 52 weeks, the levels of antibody expression remained high, suggesting that a single dose would result in long-lasting protection. “We showed that you can express protective levels of antibodies in a mammal and have that expression last for a long period of time,” Baltimore says. “It sets the stage for human trials.”

Providing patients with periodic doses of these antibodies throughout their lifetime would be safer than coaxing antibody production from muscle cells, but it would be far from cost-effective. The gene-therapy approach, by contrast, recruits muscle cells to act as antibody factories and could be administered using a single intramuscular shot.

Experts in the field are cautiously optimistic. “Mice and monkeys don’t always tell the truth. It’s a really interesting idea, and it should be assessed in clinical trials,” says Wayne Koff, senior vice-president for research and development at the International AIDS Vaccine Initiative in New York. “Until someone shows that we can make these broadly neutralizing antibodies with a [classic] vaccine, I think this is an important concept that should be supported.”

But both Burton and Koff caution that gene therapy comes with its own set of problems. Because the antibody DNA is permanently inserted into the genome, there’s no way to turn it off if someone has an immune reaction against the antibodies. But it won't be known whether such side effects exist until the method is tested in people, something that Baltimore aims to do in the next few years. The researchers at the Children’s Hospital of Philadelphia, meanwhile, hope to get the first round of human trials of their technique started before the end of 2012.