Wednesday, July 27, 2011

Avatar Kinect

http://www.i-programmer.info/news/91-hardware/2788-avatar-kinect-available-now.html



Written by Harry Fairhead
Tuesday, 26 July 2011 10:19

Is Avatar Kinect a game-changing innovation or is it just silly? With a little more work it could provide the sort of virtual - or is it augmented? - reality that creates a new way to interact with and experience novel environments.

We first reported on the Avatar Kinect system back in January this year (2011) when it was first demoed. Now you can download it and try it out for yourself from the Kinect Fun Labs. The download is free, but using it normally requires an Xbox LIVE Gold membership - although until September the 8th it is available to both Silver and Gold members.

The idea of Avatar Kinect is simple enough. It uses the Kinect to determine body position and facial expression and maps these in real-time onto an avatar displayed on the screen with other similar avatars. The following short video shows roughly how this is achieved:

The point is that by making the body position and expression accurate, something magical happens and you respond to the other avatars as if you were there in the group meeting. You can pick any of 24 "virtual stages" to explore and meet up to 7 other friends represented as avatars. You can record a scene and post it to the Kinect Share site.

The big problem for Microsoft is figuring out what this technology might be useful for. At the moment there are also big technical limitations with the system. Unless you are at the right distance the facial expression system misses many changes. The mouth tracking provides only reasonable lip sync and body movement is very limited. You can't touch yourself and there is no movement below the waist - this clearly simplifies the control problem and avoids the problem of users creating offensive avatars. Basically all you can do is treat your avatar as a static puppet whose upper body and facial expression you control. This makes it great for "sitting around and talking" but not much else.

So what can it be used for?

Microsoft are suggesting that it's ideal for virtual meetings, the rationale for virtual rather than video meetings being lower bandwidth - which doesn't seem like a huge plus. Another suggested use is in therapy sessions, where avatars might make talking easier. Microsoft seem to get closer to a potential use when they suggest performance videos. You can get your avatar to sing, tell jokes or act its way through a play. The advantage is that this frees you from the responsibility of looking the part in real life - you could even swap gender or race. To promote this idea there is a competition for the best stand-up comedian; all you have to do is create a 90-second film of your avatar telling jokes.

If you analyse what a system like Avatar Kinect offers then you have to conclude that it is basically a way for people to try on different bodies and interact with a group. It is an opportunity for anyone who feels less than confident about their appearance to become a performer for the delight of others. The internet has long provided an anonymous platform where users can express themselves and Avatar Kinect extends this to facial and body expressions.

If you ask why don't more people use video calling then the answer might well be that they aren't happy about revealing their physical appearance without spending time on polishing it to a level that they find acceptable - put bluntly would you want to take a video call just after you got out of bed, or even while you were still in bed? Perhaps an avatar, always presentable at any time of day and no matter what you are actually doing, could be the perfect stand in.

It is clearly a great piece of technology, but you can't help thinking that we haven't worked out what it's good for. New ideas needed!

The Tech Behind Avatar Kinect

http://www.i-programmer.info/news/91-hardware/1855-avatar-kinect-a-holodeck-replacment.html

Friday, July 22, 2011

150 human animal hybrids grown in UK labs: Embryos have been produced secretively for the past three years

http://www.dailymail.co.uk/news/article-2017818/Embryos-involving-genes-animals-mixed-humans-produced-secretively-past-years.html

By Daniel Martin and Simon Caldwell

Last updated at 12:42 AM on 23rd July 2011

Scientists have created more than 150 human-animal hybrid embryos in British laboratories.

The hybrids have been produced secretively over the past three years by researchers looking into possible cures for a wide range of diseases.

The revelation comes just a day after a committee of scientists warned of a nightmare ‘Planet of the Apes’ scenario in which work on human-animal creations goes too far.
Undercover: Scientists have been growing human animal hybrids in secret for the last three years (Posed by models)

Last night a campaigner against the excesses of medical research said he was disgusted that scientists were ‘dabbling in the grotesque’.

Figures seen by the Daily Mail show that 155 ‘admixed’ embryos, containing both human and animal genetic material, have been created since the introduction of the 2008 Human Fertilisation and Embryology Act.

This legalised the creation of a variety of hybrids, including an animal egg fertilised by a human sperm; ‘cybrids’, in which a human nucleus is implanted into an animal cell; and ‘chimeras’, in which human cells are mixed with animal embryos.

Scientists say the techniques can be used to develop embryonic stem cells which can be used to treat a range of incurable illnesses.

Three labs in the UK – at King’s College London, Newcastle University and Warwick University – were granted licences to carry out the research after the Act came into force.

All have now stopped creating hybrid embryos due to a lack of funding, but scientists believe that there will be more such work in the future.

The figure was revealed to crossbench peer Lord Alton following a Parliamentary question.
Research centre: Warwick University has been growing animal human hybrids over the last three years

Last night he said: ‘I argued in Parliament against the creation of human-animal hybrids as a matter of principle. None of the scientists who appeared before us could give us any justification in terms of treatment.

‘Ethically it can never be justifiable – it discredits us as a country. It is dabbling in the grotesque.

‘At every stage the justification from scientists has been: if only you allow us to do this, we will find cures for every illness known to mankind. This is emotional blackmail.

‘Of the 80 treatments and cures which have come about from stem cells, all have come from adult stem cells – not embryonic ones.
‘On moral and ethical grounds this fails; and on scientific and medical ones too.’

Josephine Quintavalle, of pro-life group Comment on Reproductive Ethics, said: ‘I am aghast that this is going on and we didn’t know anything about it.

‘Why have they kept this a secret? If they are proud of what they are doing, why do we need to ask Parliamentary questions for this to come to light?

‘The problem with many scientists is that they want to do things because they want to experiment. That is not a good enough rationale.’
Test centre: Newcastle University was another site where human animal hybrid testing was being undertaken

Earlier this week, a group of leading scientists warned about ‘Planet of the Apes’ experiments. They called for new rules to prevent lab animals being given human attributes, for example by injecting human stem cells into the brains of primates.

But the lead author of their report, Professor Robin Lovell-Badge, from the Medical Research Council’s National Institute for Medical Research, said the scientists were not concerned about human-animal hybrid embryos because by law these have to be destroyed within 14 days.

He said: ‘The reason for doing these experiments is to understand more about early human development and come up with ways of curing serious diseases, and as a scientist I feel there is a moral imperative to pursue this research.

‘As long as we have sufficient controls – as we do in this country – we should be proud of the research.’

However, he called for stricter controls on another type of embryo research, in which animal embryos are implanted with a small amount of human genetic material.

Human-animal hybrids are also created in other countries, many of which have little or no regulation.

Thursday, July 21, 2011

Wolfram launches its own interactive document format

http://www.wolfram.com/cdf/

--

http://www.pcpro.co.uk/news/enterprise/368815/wolfram-launches-its-own-interactive-document-format

By Barry Collins

Posted on 21 Jul 2011 at 09:41

Wolfram Research has launched its own document format, which it claims is "as everyday as a document, but as interactive as an app".

The Computational Document Format (CDF) allows authors to embed interactive charts, diagrams and graphics into their documents, allowing readers to adjust variables to see how increasing a price affects profits, for example, or display different segments of a brain scan.

"The idea is to communicate the world's quantitative ideas much better than has been possible before," said the company's director of strategic and international development, Conrad Wolfram.

The key to the CDF format is that anyone - not only mathematicians or programmers - will be able to create interactive documents, according to Wolfram.

"It's easy enough for citizen authorship," Wolfram claims, although he admits the company aims to make authoring even easier by allowing interactive charts to be created using the linguistic commands familiar to users of the company's Wolfram Alpha "knowledge engine".

So, for example, users will be able to create a basic application that uses a slider to vary the degree of blur in a photo, simply by typing "blur image of Abraham Lincoln with radius x".

"[Currently] anyone who can make an Excel macro should easily be able to make interactivity for CDF," said Conrad Wolfram. "Where I'd like to get is that anyone who can make an Excel chart can make interactivity in CDFs."

Users will require Wolfram's Mathematica 8 software to create CDFs, while end users will require the free Wolfram CDF Player to view the documents.

More details and a demonstration video are available from the Computable Document Format site.


Caltech Scientists Makes Biochemical Neural Network 'Brain' from DNA Strands





http://www.ibtimes.com/articles/183895/20110720/caltech-scientists-makes-biochemical-neural-network-brain-from-dna-strands.htm

Scientists from the California Institute of Technology have created an artificial neural network (or a "tiny brain," in the words of the lead scientist) from DNA strands that interact with biochemical inputs.

The artificial neurons of this network can take incomplete inputs, interact with each other, and come up with a complete conclusion. This is what the human brain does on a much more complex scale. It's also a principle scientists have used for computing and robotics.

The building block of the Caltech neural network is double-stranded DNA molecules with loose ends. These loose ends then receive the input of single-stranded DNA that binds with the double-stranded DNA, which, through DNA strand replacement, releases an output DNA strand from the double-stranded DNA.


Using this input-output mechanism, the Caltech team assembled four neurons that give out specific DNA strand outputs that serve as both 'yes' or 'no' indicators in themselves and also inputs strands into other neurons.

The end result is a neural network capable of churning out outputs for all four neurons in the absence of input for all four neurons.

The specific experiment the Caltech researchers conducted was guessing the identity of one of four scientists. The four neurons corresponded to four yes/no questions about the scientist.

Then, by inputting only 'yes' for Q3 and 'no' for Q4, for example, the neural network is able to deduce 0 1 1 0, or Rosalind Franklin. The researchers claim that there are 27 combinations of incomplete inputs that would yield a correct identification.

Moreover, if the incomplete information doesn't describe anyone (for example 'yes' for 2 and 'no' for 3), the neural network would return both 'yes' and 'no' for all four neurons, indicating an input error.
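The deduce-from-partial-answers behaviour described above can be illustrated with a toy software sketch. This is not the actual DNA strand-displacement mechanism, just a loose analogue of the pattern completion it performs; the scientists other than Rosalind Franklin, and all bit patterns except her 0 1 1 0, are invented for the example:

```python
# Toy analogue of the pattern-completion behaviour of the Caltech network.
# Each scientist is a 4-bit answer pattern (one bit per yes/no question).
# Only Rosalind Franklin's 0 1 1 0 comes from the article; the rest is made up.
SCIENTISTS = {
    "Rosalind Franklin": (0, 1, 1, 0),
    "Scientist B":       (1, 0, 0, 1),
    "Scientist C":       (1, 1, 0, 0),
    "Scientist D":       (0, 0, 1, 1),
}

def complete(partial):
    """partial maps question index (0-3) to an answer (0 or 1).
    Returns the unique matching (name, bits), or None when the input is
    ambiguous or matches nobody - the DNA network signals this case by
    answering both 'yes' and 'no' on every neuron."""
    matches = [(name, bits) for name, bits in SCIENTISTS.items()
               if all(bits[q] == a for q, a in partial.items())]
    return matches[0] if len(matches) == 1 else None

# 'yes' for Q3 and 'no' for Q4 (0-indexed questions 2 and 3):
print(complete({2: 1, 3: 0}))       # uniquely identifies Rosalind Franklin
print(complete({0: 1, 1: 1, 2: 1})) # matches no stored scientist -> None
```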

"Biochemical systems with artificial intelligence - or at least some basic decision-making capabilities - could have powerful applications in medicine, chemistry, and biological research," stated Caltech in a press release.

Lulu Qian, a Caltech senior postdoctoral scholar in bioengineering, is the lead author of the study, which is published in the July 21 issue of the journal Nature. Below are YouTube videos from her that explain this study. The three images used in this article are from the YouTube videos.

Wednesday, July 20, 2011

Fake Apple Stores Mushrooming In China; No iPhone 5 Inside

http://www.itproportal.com/2011/07/20/fake-apple-stores-mushrooming-china-no-iphone-5-inside/

Written by
Desire Athow

A new worrying phenomenon has cropped up in China and Apple has been its first victim; meet the first fake Apple Stores, entire buildings that have been designed to look like the real thing.

Chinese companies have long been known for being master copiers but this takes the concept of plagiarism and copying to a whole new level. As expected, everything, from the architecture of the building, to the products, the T-shirt worn by the staff down to the logo and the badge design come from Cupertino.

A website called BirdAbroad has pictures of what looks like an Apple store but is in reality a rip-off through and through; the author of the post also confirms that the store was torn down and replaced by a bank, but that two others have quickly appeared near the original location in the Chinese city of Kunming.

She did notice some uncommon features like the fact that the signage reads Apple Store (or even Stoer), the nameless staff badges, the poorly made signature spiral staircase, the walls that hadn't been properly painted and so on.

The author did not mention whether the store was actually selling real products (either from the grey market or those which fell from the back of lorries) or so called KIRF (Keep It Real Fake) products like the iPhone 4 (some with a normal SIM and running Android).

She also didn't say if the shop sold unannounced Apple products like the iPad 3 or the iPhone 5 but did point out that the store's employees may have been conned into believing that they were actually going to work for Apple.

The firm has only four stores in China, two in Beijing and two in Shanghai; these four stores have generated on average the highest traffic and highest revenue of any of the 323 Apple stores worldwide, according to a statement by Chief Financial Officer Peter Oppenheimer back in January.

Hon Hai, the parent company of Foxconn Electronics, is also said to be investing around $1.2 billion to open retail outlets in China for Apple products.


Monday, July 18, 2011

Top Chinese gymnast found begging on the street

http://www.telegraph.co.uk/news/worldnews/asia/china/8645237/Top-Chinese-gymnast-found-begging-on-the-street.html

One of China's most promising young gymnasts, who seemed destined for Olympic glory before his career ended in injury, has been found begging on the streets of Beijing, prompting criticism of the country's Soviet-style sports system.
Zhang Shangwu: His gold medal-winning performance was the highlight of his career, and he seemed certain to make the cut for the 2004 Athens Olympics until he broke his left Achilles tendon in training in 2002 Photo: AFP

By Malcolm Moore in Shanghai

1:02PM BST 18 Jul 2011

Zhang Shangwu, 28, a specialist on the still rings, had even sold the two gold medals he won at the World University championships in 2001 for just £10 in order to buy food.

Mr Zhang said there were others like him who had found themselves in a desperate situation after being cut loose from China's state-run sports system.

Speaking on a mobile phone he bought for 30 yuan (£2.90) in order to find work, Mr Zhang said he had received a phone call recently from another struggling gymnast.

"He thought I might draw some attention to the problem. But I can barely look after myself at the moment, let alone take on anyone else's worries," he said.

Born into a peasant family in Baoding, Hebei province, Mr Zhang was sent to a local gymnastics academy at the age of five. After seven years of gruelling training, he showed enough promise to be selected to China's national team and in 2001 he was entered by officials into the World University Games, despite not having an education outside his sport.

His gold medal-winning performance was the highlight of his career, and he seemed certain to make the cut for the 2004 Athens Olympics until he broke his left Achilles tendon in training in 2002.

He never fully recovered, missed the games, and in 2005 he retired with a 38,000 yuan (£3,650) pay-off from the government in his home province of Hebei. "The money meant the local team no longer had to take any liability for my future," he said.

"After I left the sports system, I got a job as a food delivery boy, but after a while my injury got worse and worse so eventually I couldn't run or even walk for long periods".

His savings were wiped out, he said, when his grandfather had a brain haemorrhage. "That used up all my remaining money, and then I was forced to sell my medals because I did not have any money for food."

Shortly afterwards, in 2007, he turned to theft and was arrested in Beijing, only being released in April this year. "Since I got out, I have been begging and I was sleeping overnight in an internet café," he said.

Mr Zhang's situation has shocked China, which spares no effort in honouring the winners of Olympic gold medals, showering them and their families with gifts. Critics said that it was unacceptable for the majority of athletes, who retire in anonymity, to be left in difficult circumstances.

Xing Aowei, a former team-mate of Mr Zhang and a winner at the Sydney Olympics in 2000, told a Chinese website that he was concerned about the impact his story would have on gymnastics.

"With a world champion descending into such a life, who would want to be a gymnast in the future?" he asked.

Other Chinese sportsmen have also struggled after leaving the protective blanket of the national team. Ai Dongmei, a former marathon champion, sold the 10 medals she had won in international competitions in order to support her family after her husband was laid off. Zou Chunlan, the national female weightlifting champion, worked at a public bathhouse as a masseuse.

Mr Zhang said he was now living in a hotel paid for by a Chinese newspaper and was happy to accept charity until he finds himself a stable job.

Middle class China turns to private health insurance

http://www.bbc.co.uk/news/business-14141740

Middle class China turns to private health insurance
By Linda Pressly Reporter, BBC News

Over half of all healthcare in China is paid for by the consumers themselves

Healthcare - and how you pay for it - is one of life's big worries. In China few people have private health insurance but the market is growing.

Polly Deng is 30 and lives with her husband and her mother, Lu Xiao Dang, in Shanghai. Polly's mother has just returned home after a stay in hospital.

"She was in hospital for 12 days having an operation on her foot. It was a minor operation, so we hope she's going to be fully recovered within three months," she says.

The cost of Lu Xiao Dang's procedure was 5,000RMB ($773). The Chinese government's health insurance scheme paid for 60% of the cost of the operation.

She paid cash for the other 40% and will claim this against her additional private health insurance - although she does not know if she will be compensated for the full amount.

Across Asia millions of newly middle class families are making personal finance decisions for the first time. We look at the big issues facing them.

Polly picked up the health insurance habit from her mother. She belongs to the government's scheme, her company's scheme, and she has her own private cover.

Private health insurance is relatively new to China, but it's growing fast.

"Between the years 2000 and 2009 the average annual growth rate of the private health insurance market in China was around 27%. But what you have to remember is this is growth from a small base," says Brian Mi, General Manager in China for IMS Health, a medical market research company.

"It is only a tiny proportion of the population who have any kind of private cover, around 3.5% of the market spend on healthcare is paid for by private health insurance. Over 50% of all healthcare in China is paid for by the consumers themselves."
Private policy potential

Although the number of people with private healthcare policies may be small now, insurance companies see China as a country with huge potential. But there are obstacles to the development of private health products.
Dr Feng Liu: "Too much emphasis on selling the policies and not enough on processing the claims"

One of them is that in the Chinese health system, doctors get a small proportion of their salaries from the state, and have to raise the rest through their patients.

This means there is a high rate of drug over-prescription and diagnostic tests - the more you have, the more you pay. Some insurance companies are reluctant to get involved in a market where cost can be open-ended.

But Dr Feng Liu, the Chairman of the Financial Planning Standards Board of China, says insurance companies offering private health plans sometimes do not operate in the interests of their clients.

"People aren't used to buying health insurance, and sometimes companies encourage people to buy insurance they don't actually need. I think there is too much emphasis on selling the policies and not enough on processing the claims, which always seem to be delayed," he says.
'Healthy China'
"In China people are very good at saving for emergencies. So the portion of their income they commit to insurance doesn't have to be so high" - Phuong Chung, Senior Vice-President, Manulife-Sinochem

The Chinese government is working on a whole raft of state health reforms. Its Healthy China programme was announced in 2008 with the aim of providing state health insurance for all of its 1.4 billion population by 2020.

Before China's economic reforms began in 1978, there used to be a system of near-universal government insurance cover.

With the move to a market economy, people paid much more for healthcare, one of the reasons why China became a great nation of savers. It is estimated people squirrel away more than 40% of their disposable earnings, some of which will be savings in case of a health emergency.

Although 90% of the population now have state health insurance, it only offers partial cover. Generally outpatient costs are not covered and only 60% of inpatient hospital bills are compensated.

To pay the excess, people use their savings or borrow from family. But there are also cases where families are plunged into poverty and desperation because they cannot afford health bills.

People may buy separate private health insurance to cover the excess cost of healthcare not covered by the state.
Critical illness

Polly Deng has bought a critical illness policy from private insurers, which covers killer diseases like cancer.

"Depending on the policy, critical illness cover costs around $500 a year. It's probably 20% of what people pay in Western countries, but healthcare is less costly in China," says Phuong Chung, Senior Vice-President at Manulife-Sinochem, an insurance company with nearly 15 years' experience in the Chinese market.

"In the event of illness, it will pay out around twice your annual salary."

He says this kind of policy - together with life insurance - has become popular with middle class people.

"This reflects the fact that in China people are very good at saving for emergencies. Middle class people often have savings in their homes, bank deposits and equity investments. So the portion of their income they commit to insurance doesn't have to be so high."

With her level of health insurance cover, Polly Deng thinks she has made a good investment.

"When I was at the hospital visiting my mother, there were women there who had paid between 30,000 and 50,000RMB ($4,640 - 7,734) to get treatment for broken legs. It can be very expensive!"

"My friends don't really have a clue about insurance, and I don't want to push them into it. But one day they will understand that it's a good idea."

Saturday, July 16, 2011

The metrics are the message: how analytics is shaping social games

http://gizmodo.com/5821571/google-makes-reading-news-a-game

Google Makes Reading News a Game

Sam Biddle

Internet masterminds realized you can get people to do almost anything (like, say, share their banal restaurant visits with friends) if you make it a pseudo-game. Now Google's piling on! Reading news will let you collect badges and level up.

Google News' new badge system will ostensibly let you organize your browsing habits better, but I think it's safe to say that this is mostly about tapping into our perhaps primordial life-drive to rack up arbitrary points. To use Google's own example, frequent reading of basketball articles will earn you a 'Basketball' badge, which you can then level up with repeated reading.

There are more than 500 badges to unlock—presumably for extremely niche topics like sexy knitting and the Micronesian legislature—so you better get readin' if you want to visually dominate your friends in current events consumption. One caveat: you need Web History switched on, so privacy pros might not want to stitch these badges on.



http://www.guardian.co.uk/technology/gamesblog/2011/jul/14/social-gaming-metrics

Posted by Keith Stuart Thursday 14 July 2011 16.08 BST guardian.co.uk

The metrics are the message: how analytics is shaping social games

Facebook and online game developers need to watch their players closely to ensure success. The result is a new generation of companies dedicated to social gaming analytics. This is what your favourite freemium game knows about you

Picture this. You're deeply engaged in one of the many free-to-play adventure games available online, when you decide to buy a bigger sword. It could be that you made the tactical decision to extend your armoury, or that you panicked when you spotted a gigantic dragon lumbering in your direction; you might not even know why you did it. You just fancied a bigger sword. But that action took you into the barely two percent of free-to-play gamers who actually pay for content – and the game makers want to know why.

The freemium gaming business is expanding rapidly. We all know about the Facebook behemoth Zynga, which now claims over 250 million monthly players, and is valued at anywhere between $5-10bn. But online, there are dozens of global companies hawking a range of in-depth gaming experiences. There is the German publisher and portal operator, Bigpoint, which runs massively-multiplayer browser games like Legend: Legacy of the Dragons and Battlestar Galactica Online and claims over 150 million subscribers. There is Korean veteran Webzen, with its long-running fantasy role-player Mu Online, which alone boasts 56 million users. Beyond these giants, there are dozens of new freemium social, browser and smartphone games starting up every month, looking to gain a foothold in the densely crowded market.

And what the big players have learned is that coming up with a great game concept is only the beginning. A successful free-to-play game is all about inspection and iteration; it is about launching quietly, testing and fine-tuning the experience constantly, watching how players react, listening to feedback and re-building. This is how the likes of Popcap and Playfish arrived at super-addictive titles like Fifa Superstars and Plants vs Zombies. And now a whole new business is emerging to help developers understand their players.

Alan Miller has been in the games business for over 30 years. He was one of the first programmers to join Atari's home console business, and he later co-founded Activision, currently the world's largest third-party games publisher. For the last decade, though, he's been working in the online sector, including a stint working on advergames with fellow Atari legend David Crane. Earlier this year he joined GamesAnalytics, a new UK-based company specialising in the data-mining and monetisation of online games. It's a real-time service that continually monitors every player in any virtual world it's commissioned to work on. It's like CCTV constantly monitored by psychologists and statisticians.
GamesAnalytics gathers data on all aspects of players, including the basics: age, gender and location.

"Our objective generally is to increase monetisation and improve player satisfaction," he explains. "Usually, the publisher's objectives have to do with increasing revenues, but not always. Sometimes they want to increase the virality, the number of invites and notifications sent out from a game. Then we look at their data and we identify behaviour patterns. It allows the publisher to learn a lot more about their game than they thought they knew."

In his experience, it's rarely great big design errors that trip up growing freemium games – it's tiny, often overlooked alterations. "We're working with a big MMO at the moment. We studied the last five years of their operations and we noticed that there was a huge change in just one month in the retention rate of new players. It turned out the publisher had made just one change that caused the game to be less appealing for newcomers - they didn't even notice it; this is one of their worlds! So now they're trying to digest that information and work out what they did wrong."

The key message behind freemium analytics is that free-to-play game construction is similar to web design – the publisher needs to understand, and subtly guide, every aspect of the user journey through the experience. Whereas traditional games are about creating big macro-environments for player exploration, freemium is about micro-managing every step the player takes toward actually buying something.

"A developer can build 'funnels' that depict the player actions leading to a financial conversion like purchasing extra content or virtual merchandise," says Justin Johnson, CTO of Playmetrix, another British company specialising in game analytics. "It's then down to the developer to use this analysis to improve conversion by removing obstructions and bottlenecks that may be inherent in the design. For example, aspects of the game may be unclear or too difficult for newcomers, leading to early high attrition, which means they never reach the purchase step. Our system also tracks the amount of money that a player spends, giving metrics called ARPU (Average Revenue Per User) and LTV (Lifetime Value). Simply put, the LTV of a player must exceed their cost of acquisition for the game to be profitable."
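The ARPU and LTV arithmetic Johnson mentions is simple enough to sketch in a few lines. All numbers below are invented for illustration:

```python
# Minimal sketch of the freemium revenue metrics (illustrative numbers only).
def arpu(total_revenue, active_users):
    """Average Revenue Per User over some period."""
    return total_revenue / active_users

def lifetime_value(monthly_arpu, avg_lifetime_months):
    """A crude LTV estimate: average monthly revenue per user multiplied
    by the expected number of months a player sticks around."""
    return monthly_arpu * avg_lifetime_months

monthly_arpu = arpu(50_000.0, 200_000)   # $0.25 per user per month
ltv = lifetime_value(monthly_arpu, 6)    # $1.50 over an assumed 6-month lifetime
cost_per_acquisition = 1.20              # e.g. ad spend per new install
profitable = ltv > cost_per_acquisition  # Johnson's profitability condition
print(ltv, profitable)                   # 1.5 True
```

In practice LTV models are far more elaborate (cohorts, discounting, churn curves), but the profitability test at the end is exactly the comparison Johnson describes.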

It's a strange business. In the free-to-play universe, every player action is a potential metric in a revenue model. In-game behaviour is an algorithm that needs to be unravelled and decoded. Developers have to operate like a sort of secret police agency, effectively bugging players – the Playmetrix software allows them to embed 'callbacks' into their game code that trigger when players do something of interest. This is all visualised via graphs and charts, so activities become infographics. It sounds kind of sinister, but understanding every intricate player activity is what makes a game in this sector successful. With no financial outlay at the beginning, players have much less impetus to keep playing; hence 'funnel analysis' – tracking a player from the moment they register, through their actions in the game, to their first purchase. At any moment along this journey they may quit. Understanding this is the key to lowering the dreaded 'attrition rate' – the number of people leaving the game.
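The funnel idea itself is easy to sketch. The event names and log format here are hypothetical stand-ins for whatever a game's callback instrumentation would actually emit:

```python
# Hypothetical event log: one (player, event) row per triggered callback.
events = [
    ("p1", "register"), ("p1", "tutorial_done"), ("p1", "first_purchase"),
    ("p2", "register"), ("p2", "tutorial_done"),
    ("p3", "register"),
    ("p4", "register"), ("p4", "tutorial_done"),
]

# The funnel steps a player must pass through on the way to a purchase.
FUNNEL = ["register", "tutorial_done", "first_purchase"]

def funnel_counts(events, funnel):
    """Count how many distinct players reached each funnel step."""
    reached = {step: set() for step in funnel}
    for player, event in events:
        if event in reached:
            reached[event].add(player)
    return [len(reached[step]) for step in funnel]

counts = funnel_counts(events, FUNNEL)
for step, n, prev in zip(FUNNEL, counts, [counts[0]] + counts):
    # The step with the steepest drop from the previous one is the bottleneck.
    print(f"{step:15s} {n:3d} players  ({n / prev:.0%} of previous step)")
```

In this toy log, the register-to-tutorial drop loses one player in four, and only one of the three tutorial finishers ever buys anything – exactly the kind of drop-off an analyst would flag.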

Another key element in the maths of social gaming is the DAU/MAU ratio. As Johnson explains, "most games have the base behaviours of attendance and engagement. For that, we track daily active uniques (DAU) and monthly active uniques (MAU). By taking a ratio using the two (DAU/MAU) we can determine the percentage of monthly users that play each day and also derive churn." Last year, Lisa Marino of RockYou gave a GDC talk entitled Monetizing Social Games in which she identified the minimum threshold for a successful DAU/MAU ratio as 0.2. That's game design reduced to a single figure.
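A toy illustration of that calculation, using invented session data for three hypothetical players over a 30-day month:

```python
from datetime import date

# player -> set of days on which they logged in (invented data)
sessions = {
    "p1": {date(2011, 7, d) for d in range(1, 31)},   # plays every day
    "p2": {date(2011, 7, 1), date(2011, 7, 15)},      # plays twice
    "p3": {date(2011, 7, 9)},                          # plays once
}

def dau(sessions, day):
    """Daily active uniques: players who logged in on a given day."""
    return sum(1 for days in sessions.values() if day in days)

def mau(sessions):
    """Monthly active uniques: players with any session in the month."""
    return sum(1 for days in sessions.values() if days)

avg_dau = sum(dau(sessions, date(2011, 7, d)) for d in range(1, 31)) / 30
ratio = avg_dau / mau(sessions)
print(f"average DAU/MAU: {ratio:.2f}")  # compare against Marino's 0.2 threshold
```

Here the ratio works out to about 0.37 – comfortably above the 0.2 threshold, thanks entirely to the one player who shows up every day.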
Drakensang: developers of new freemium titles like Drakensang have to get difficulty levels just right, challenging players without setting barriers so high that they discourage payments.

But it's not just about data mining and statistics. Miller argues that good freemium analysts have to know about traditional game design too. "Let's say that in a certain MMO there are ten common behaviour patterns that ultimately result in players becoming paying customers. Clearly, our identification of these patterns won't account for every player, but they'll account for significant groups of players that went through the process. And we might discover that along the chain of events that lead to monetisation, the player has to, say, kill a dragon. Well, we can identify bottlenecks and say well maybe 89% of the players who tried to kill that dragon failed. Then we would advise the publisher to reduce the threshold. That's a combination of using the analytical tools coupled with our experience of game design to improve the situation for both the publishers and the players."

In short, the dragon fight might not necessarily have to be removed, or even made easier, but it needs to be sign-posted. Miller says the GamesAnalytics software can deliver in-game messages to players, such as hints, challenges, free goods or offers – all designed to motivate that individual to move along that pipeline to satisfaction and – in theory – revenue.

Johnson agrees that it's sometimes very slight in-game elements that can add to player churn. "The biggest mistakes are sometimes the most simple ones to make. For example, it's easy to overcomplicate, overengineer or omit to tell the player what they're supposed to do next. Instructions may not be clear, or a sense of progress, reward or player gratification may not come soon enough. It may be that a game's intro sequence is slightly too long, which means a large number of new players get bored and turn to another game as a result. It's important to get the psychology and balance right so that the decision a player has to make in order to purchase is based on perception of value. If a game is confusing, incorrectly paced and doesn't have a clear sense of value in its monetisation plan then it's not going to do well."

Of course, GamesAnalytics and Playmetrix are not the only companies operating in this emerging sector – Kontagent is an analytics platform that provides data on social applications, including games, and Adobe's Omniture helps with tracking acquisitions. The big question, though, is why these companies exist at all. With player retention playing such a key role in social and online game design, shouldn't the publishers have their own analysts in-house?
FrontierVille: Zynga's FrontierVille has used analytics to improve gameplay – and add more suggestive humour...

Zynga certainly does – its dedicated team makes use of what is apparently the largest data warehouse in the world to track the behaviour patterns of its players. Sometimes it simply learns from how its users communicate. At this year's SXSW festival in Austin, the company's chief designer Brian Reynolds told how a tutorial level for the FrontierVille game involved finding a sheep; when players achieved it, they had the option to post a cartoon image of a girl milking a ewe – except many players were amused that the crude image made it look as though some sort of inter-species sex act was going on. The amount of chat it generated turned out to be a brilliant viral marketing tool, so Zynga went ahead and loaded the game's Facebook messages with innuendo, such as 'Bob has some serious sacks' and 'Margaret needs a few good screws'. It was enormously popular. So there is science, there is data-mining, and there is the knowledge that middle-aged gamers like dirty jokes.
Bejeweled Blitz: PopCap is another proponent of analytics – but says brilliant game design is the key factor.

PopCap, the hugely successful developer of Bejeweled and Zuma, has its own system. As Bart Barden, the company's director of online business, explains, "we have an analytics team in-house that studies player data from three perspectives. First we look at actual gameplay, which includes things like number of games played, average score, highest score and number of moves. Second, we analyse the social activity tied to the game, which includes how many times they share a score or power up on Facebook. And finally we look at monetisation, how people like to transact.

"Recently, in Bejeweled Blitz, we increased the frequency of players being able to get and receive special gems (like the Cat's Eye or Phoenix Prism) based on some data we saw surrounding sharing. We found that players were much more likely to share a special gem with their friends than any other event in the game so we responded to that. This change has resulted in better engagement, which is a win-win for both the player and PopCap."

Indeed, the company's mastery of the social gaming market has just led to its purchase by Electronic Arts for a whopping $750m.

So why isn't this sort of analysis part of every social or online game? Johnson reckons it's down to cost and time. "Some developers have rolled their own solution but they are generally ad-hoc and fairly crude. Producing meaningful analysis capabilities isn't trivial - segmentation, funnels and cohort analysis require a lot of thought and before you know it you're dealing with scalability, uptime commitments and a whole host of technical and operational issues that are far removed from creatively delivering a game." He likens the use of third-party analytics software to licensing a 3D engine like Unreal – it's about freeing up time and resources within the development process.
FIFA Superstars: EA bought social game developer Playfish in 2009 to gain a foothold in the Facebook sector. FIFA Superstars was the successful result.

It remains an esoteric, slightly sinister area for traditional gamers, though. Indeed, old skool publishers such as Activision and EA have found it just as challenging to evolve their thinking from the retail model into the freemium arena. Which is why EA bought Playfish in a deal worth up to $400m and Disney laid out $760m on Playdom – it's just easier to buy that expertise in. It's going to become more important though. With more and more games adopting the freemium model, actually understanding what hooks gamers into patterns of playing, sharing and buying will be a vital element. Barden reckons that, right now, third-party analytics systems are most clearly optimised for social titles based around resource management, but says their use will become more common, especially for small and medium-sized developers.

"Additionally," he says, "as more social game platforms emerge, third-party analytical toolsets become more valuable to consolidate and aggregate data across multiple social networks and products. The overall importance and focus on analytics will continue to grow over the next two or three years but how individual companies choose to act on that intelligence may vary."

The important thing to grasp is that, in the free-to-play era, marketing and game design are indivisible – they're essentially the same thing. Players have to be on-brand and on-message, they have to be active agents in the game world and the advertising of that world. And everyone has to be watched. Still, all the industry people we spoke to were keen to point out that brilliant design remains the leading factor. As Miller puts it, "It's important to realise, we're not selling accountancy software. You're creating an emotional experience, it's an art form, individual style is tremendously important."

Friday, July 15, 2011

Thursday, July 14, 2011

Soft Memory Device Opens Door To New Biocompatible Electronics

http://news.ncsu.edu/releases/wmsdickeysoftmemory/

For Immediate Release

Matt Shipman | News Services | 919.515.6386

Dr. Michael Dickey | 919.513.0273

Dr. Orlin Velev | 919.513.4318

Release Date: 07.14.2011
Filed under Releases

Researchers from North Carolina State University have developed a memory device that is soft and functions well in wet environments – opening the door to a new generation of biocompatible electronic devices.

“We’ve created a memory device with the physical properties of Jell-O,” says Dr. Michael Dickey, an assistant professor of chemical and biomolecular engineering at NC State and co-author of a paper describing the research.

Researchers have created a memory device with the physical properties of Jell-O, and that functions well in wet environments.

Conventional electronics are typically made of rigid, brittle materials and don’t function well in a wet environment. “Our memory device is soft and pliable, and functions extremely well in wet environments – similar to the human brain,” Dickey says.

Prototypes of the device have not yet been optimized to hold significant amounts of memory, but work well in environments that would be hostile to traditional electronics. The devices are made using a liquid alloy of gallium and indium metals set into water-based gels, similar to gels used in biological research.

The device’s ability to function in wet environments, and the biocompatibility of the gels, mean that this technology holds promise for interfacing electronics with biological systems – such as cells, enzymes or tissue. “These properties may be used for biological sensors or for medical monitoring,” Dickey says.

The device functions much like so-called “memristors,” which are vaunted as a possible next-generation memory technology. The individual components of the “mushy” memory device have two states: one that conducts electricity and one that does not. These two states can be used to represent the 1s and 0s used in binary language. Most conventional electronics use electrons to create these 1s and 0s in computer chips. The mushy memory device uses charged molecules called ions to do the same thing.

In each of the memory device's circuits, the metal alloy is the circuit's electrode and sits on either side of a conductive piece of gel. When the alloy electrode is exposed to a positive charge it creates an oxidized skin that makes it resistive to electricity. We'll call that the 0. When the electrode is exposed to a negative charge, the oxidized skin disappears, and it becomes conductive to electricity. We'll call that the 1.

Normally, whenever a negative charge is applied to one side of the electrode, the positive charge would move to the other side and create another oxidized skin – meaning the electrode would always be resistive. To solve that problem, the researchers "doped" one side of the gel slab with a polymer that prevents the formation of a stable oxidized skin. That way one electrode is always conductive – giving the device the 1s and 0s it needs for electronic memory.
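The switching behaviour described above can be captured as a toy state machine. The sign conventions (positive charge oxidizes, negative charge strips the skin) come from the article; everything else here – the class name, the idea of a clean voltage threshold – is an invented simplification, not the actual device physics.

```python
class GelMemristorToy:
    """Toy model of the oxidized-skin switching described in the article.

    A positive charge grows an oxidized skin (resistive, read as 0); a
    negative charge dissolves it (conductive, read as 1). At zero applied
    charge the state simply persists - that persistence is the memory.
    """

    def __init__(self):
        self.oxidized = True          # start in the resistive '0' state

    def apply_charge(self, volts: float):
        if volts > 0:
            self.oxidized = True      # skin forms -> resistive
        elif volts < 0:
            self.oxidized = False     # skin dissolves -> conductive
        # volts == 0: no change; the cell remembers its last state

    def read_bit(self) -> int:
        return 0 if self.oxidized else 1

cell = GelMemristorToy()
cell.apply_charge(-1.0)   # write a 1
cell.apply_charge(0.0)    # power removed; state persists
print(cell.read_bit())
```

The point of the sketch is the memristor-like property: the readable state depends on the charge history, not on any currently applied signal.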

The paper, “Towards All-Soft Matter Circuits: Prototypes of Quasi-Liquid Devices with Memristor Characteristics,” was published online July 4 by Advanced Materials. The paper was co-authored by NC State Ph.D. students Hyung-Jun Koo and Ju-Hee So, and NC State INVISTA Professor of Chemical and Biomolecular Engineering Orlin Velev. The research was supported by the National Science Foundation and the U.S. Department of Energy.

NC State’s Department of Chemical and Biomolecular Engineering is part of the university’s College of Engineering.

-shipman-

Note to Editors: The study abstract follows.

“Towards All-Soft Matter Circuits: Prototypes of Quasi-Liquid Devices with Memristor Characteristics”

Authors: Hyung-Jun Koo, Ju-Hee So, Michael D. Dickey and Orlin D. Velev, North Carolina State University

Published: Online July 4 in Advanced Materials

Abstract: We present a new class of electrically functional devices composed entirely of soft, liquid-based materials that display memristor-like characteristics. A memristor, or a “memory resistor”, is an electronic device that changes its resistive state depending on the current or voltage history through the device. Memristors may become the core of next generation memory devices because of their low energy consumption and high data density and performance. Since the concept of memristors was theorized in 1971, resistive switching memories have been fabricated from a variety of materials operating on magnetic, thermal, photonic, electronic and ionic mechanisms. Conventional memristive devices typically include metal-insulator-metal (M-I-M) junctions composed of rigid stacks of films fabricated by multiple vacuum-deposition steps, often at high temperature. The most common “insulator” materials in M-I-M memristor junctions are inorganic metal oxides such as TiO2 and NiO. Conducting pathways can form by current bias across such layers. Solid electrolytes between metal electrodes can also be used to create resistance switches (e.g., Ag/Ag2S/Pt), in which conductive metal filaments that bridge the two electrodes can be formed or annihilated on demand. Memristive circuits composed of organic materials have advantages over conventional metal oxides due to their ease of processing, light weight, and low cost. A variety of organic materials such as homogeneous polymers, small-molecule or nanoparticle doped polymers, and organic donor-acceptor complexes have been evaluated as components in memory switching devices.

NSF Funds $18.5 Million Effort to Create Mind-Machine Interface

http://scienceblog.com/46254/nsf-funds-18-5-million-effort-to-create-mind-machine-interface/

http://depts.washington.edu/sensmot/pmwiki/pmwiki.php?n=Main.MemberBiographies

The National Science Foundation today announced an $18.5 million grant to establish an Engineering Research Center for Sensorimotor Neural Engineering based at the University of Washington.

“The center will work on robotic devices that interact with, assist and understand the nervous system,” said director Yoky Matsuoka, a UW associate professor of computer science and engineering. “It will combine advances in robotics, neuroscience, electromechanical devices and computer science to restore or augment the body’s ability for sensation and movement.”

The center launches this month and will be based in Russell Hall on the UW’s Seattle campus. The grant is for five years of funding, with the possibility of renewal for another five years.

Partners are the Massachusetts Institute of Technology and San Diego State University. Also partnering are historically minority-serving institutions Spelman College and Morehouse College, both in Atlanta, and Southwestern College in Chula Vista, Calif. International partners are the University of British Columbia and the University of Tokyo.

Researchers will develop new technologies for amputees, people with spinal cord injuries and people with cerebral palsy, stroke, Parkinson’s disease or age-related neurological disorders.

“We already see chips that interface with neural systems and then stimulate the right muscles based on that information, and we have purely mechanical lower-limb prostheses that are fast enough to compete in the Olympics,” Matsuoka said. “Our center will use sensory and neural feedback to give these devices much more flexibility and control.”

A diverse group of faculty from the UW College of Engineering, UW College of Arts and Sciences and the UW Medical Center will be involved in the new center. Among them are Chet Moritz, who works on restoring movement to paralyzed limbs; Matsuoka, whose Neurobotics Laboratory works on the human-robot interface; Thomas Daniel and Kristi Morgansen, who study animals as the basis for new flying robots; and Jeffrey Ojemann, Rajesh Rao and Eberhard Fetz, who work to detect and interpret human brain signals.

Scientists at the UW and partner institutions will work to perform mathematical analysis of the body’s neural signals; design and test implanted and wearable prosthetic devices; and build new robotic systems.

The four new engineering research centers announced this month by the NSF have an increased focus on industry participation. This center’s 23 industry partners include Microsoft Corp., Intel Corp. and Lockheed Martin Corp.; smaller companies and startups such as Impinj Inc., NeuroSky Inc. and NeuroVista Corp.; as well as industry organizations and venture capitalists that will help turn ideas into products and companies.

Collaborators also include nonacademic research institutions such as the Allen Institute for Brain Science and the La Jolla Bioengineering Institute, and hospitals in Seattle and San Diego.

“The Engineering Research Center for Sensorimotor Neural Engineering will bring together university and industry researchers to establish Seattle as an education, research and commercial hub for ‘neurobotics,’” said Matt O’Donnell, the UW’s dean of engineering. “We have fantastic partners and a strong leadership team to accelerate innovations and help prepare students to advance the field.”

The majority of the funding will support undergraduate and graduate student research. Early systems might involve remote or wearable devices that help guide rehabilitation exercises to remap brain signals and restore motor control. Ultimately, researchers hope to develop implantable prosthetics that are controlled by brain signals and include sensors that shuttle information back to wearers so they can react to their environment – creating robotic systems that are truly integrated with the body’s nervous system.

“I think the really interesting development is literally where the silicon meets the collagen,” said Daniel, the center’s deputy director and a UW biology professor. “It remains an open challenge, one of the current problems in neural engineering.”

The other deputy directors are Kee Moon at SDSU and Joel Voldman at MIT.

All three schools will offer two new undergraduate courses, two new graduate courses and a graduate certificate program in neural engineering. The UW also will offer an interdisciplinary dual undergraduate degree in neuroscience and engineering, and an undergraduate minor in neural engineering.

As with all NSF-funded engineering research centers, this one has a mission to integrate research with education and community outreach. The center will work with school districts in Seattle and San Diego to develop neural robotics curriculum for middle school and high school students. It also will reach out to women, underrepresented minorities and people with disabilities.

“We’re excited to be building a pathway, starting from about middle school, for students to be exposed to research and to this topic,” Matsuoka said.

The UW is currently home to another major NSF-funded center, the Center on Materials & Devices for Information Technology Research, established in 2002 through a similar NSF program for science research.

Saturday, July 9, 2011

Online Social Security Statement in limbo as agency adjusts for future

http://www.networkworld.com/community/blog/online-social-security-statement-limbo-agency

GAO report outlines Social Security Administration’s online challenges as benefits debate simmers in background
By Layer 8 on Fri, 07/08/11 - 1:56pm.

While the debate over Social Security benefits is heating up in Congress one of the most basic ways everyone interacts with the agency - the yearly Social Security Statement - is in limbo as the agency struggles to move it online.

The Social Security Statement has been issued every year since 2000 to more than 150 million workers, serving as the government's key way of communicating with workers about benefits, earnings records and how much retirement money they have. The statement is also a key tool for communicating with the public about the long-term financial challenges the Social Security system faces. However, whether you realize it or not, the SSA suspended mailings of the statement in March, citing budgetary concerns.


A report out today from the Government Accountability Office said that while SSA's budgetary decision to suspend statement mailings will leave some Americans without a statement this year, it has also created the impetus for SSA to seek new and more cost-effective ways to distribute this information. That's where moving the statement online comes in. If SSA can assure the security of this sensitive information, this approach holds real promise: it can meet the electronic demands of an increasingly Internet-literate population while providing flexibility for improved statement design, the GAO stated.

But there are problems of course. From the GAO report:

* Although SSA's first attempt to make the statement available online in 1997 was short-lived due to privacy concerns, SSA may now be better positioned to move forward with this approach, though it is unknown when the agency will be fully ready. SSA is developing a new electronic authentication system and a "MySocialSecurity" Web page to allow individuals to access personalized SSA information online. Officials report that both the authentication system and the "MySocialSecurity" Web page have already undergone initial testing to assess their feasibility and public opinion about such an approach.
* While the agency had not determined what information would initially be made available through this portal, when the Commissioner suspended mailings of the statement in March of this year, SSA decided that the statement would be the top priority. According to officials, both the authentication system and the statement page for "MySocialSecurity" are currently in the initial development phase, as staff build the prototypes. Once the prototypes are completed, SSA will conduct additional testing both internally and with the public, on an iterative basis, until the agency determines that both the authentication system and the statement page on "MySocialSecurity" provide sufficient safeguards and are user-friendly. SSA officials said that testing of the online statement page will begin in August, but they could not provide a date for when the authentication system testing will begin. Because officials do not know how long the testing phase will last, they could not provide a date for when the statement will be available to the public online.
* Although officials told the GAO that they plan to fully assess the portal's safeguards before moving ahead with the online statement, SSA's Inspector General recently expressed concerns about the agency's information technology systems, including service delivery. Specifically, in a recent report on SSA management challenges, while the Inspector General noted his support of SSA's decision to offer more services online to enhance customer service, he cautioned the agency to proceed carefully with this initiative, ensuring proper authentication controls are in place before full implementation.
* SSA has not yet considered how they will reach those who cannot or will not obtain the statement online, though at least some will not be able to read statements provided only in English. Through its own 2010 survey of statement recipients, SSA found that only 21% expressed a preference for receiving the statement electronically instead of by mail, including 8% who said they would prefer to receive the statement on request via e-mail and 13% who said they would prefer to obtain it online. These data suggest that SSA will need to employ a substantial public relations strategy to ensure workers are made aware of and encouraged to access the online statement.
* SSA officials also could not provide information on how they plan to address access issues related to the online statement. Although SSA currently has a pilot project underway that has made computer workstations available to the public in selected field offices, SSA officials have not yet determined how those could be used to access the portal and online statement. However, such use may be needed by individuals who do not otherwise have Internet access. In addition, key officials involved in the online statement project could not provide information on any other plans SSA is developing to address Internet access issues.
* While SSA officials reported that upcoming tests of the portal will focus on its user-friendliness, they do not have plans in place for publicizing the online statement. Specifically, the project lead for the online statement said that an internal work group is currently considering options for SSA's public roll-out of the online statement, but the agency has not yet developed a plan for carrying it out.

In the end, if reforms currently being debated in Congress are enacted, educating the public about program changes and how they will affect benefits will likely be a high priority for SSA, and the statement is likely to be one of the agency's key mechanisms for accomplishing this goal.

Follow Michael Cooney on Twitter: nwwlayer8

Friday, July 8, 2011

The Birth of Optogenetics

http://the-scientist.com/2011/07/01/the-birth-of-optogenetics/

An account of the path to realizing tools for controlling brain circuits with light

By Edward S. Boyden | July 1, 2011


For a few years now, I’ve taught a course at MIT called “Principles of Neuroengineering.” The idea of the class is to get students thinking about how to create neurotechnology innovations—new inventions that can solve outstanding scientific questions or address unmet clinical needs. Designing neurotechnologies is difficult because of the complex properties of the brain: its inaccessibility, heterogeneity, fragility, anatomical richness, and high speed of operation. To illustrate the process, I decided to write a case study about the birth and development of an innovation with which I have been intimately involved: optogenetics—a toolset of genetically encoded molecules that, when targeted to specific neurons in the brain, allow the activity of those neurons to be driven or silenced by light.
A strategy: controlling the brain with light

As an undergraduate at MIT, I studied physics and electrical engineering and got a good deal of firsthand experience in designing methods to control complex systems. By the time I graduated, I had become quite interested in developing strategies for understanding and engineering the brain. After graduating in 1999, I traveled to Stanford to begin a PhD in neuroscience, setting up a home base in Richard Tsien’s lab. In my first year at Stanford I was fortunate enough to meet many nearby biologists willing to do collaborative experiments, ranging from attempting the assembly of complex neural circuits in vitro to behavioral experiments with rhesus macaques. For my thesis work, I joined the labs of Richard Tsien and of Jennifer Raymond in spring 2000, to study how neural circuits adapt in order to control movements of the body as the circumstances in the surrounding world change.

In parallel, I started thinking about new technologies for controlling the electrical activity of specific neuron types embedded within intact brain circuits. That spring, I discussed this problem—during brainstorming sessions that often ran late into the night—with Karl Deisseroth, then a Stanford MD-PhD student also doing research in Tsien’s lab. We started to think about delivering stretch-sensitive ion channels to specific neurons, and then tethering magnetic beads selectively to the channels, so that applying an appropriate magnetic field would result in the bead’s moving and opening the ion channel, thus activating the targeted neurons.

By late spring 2000, however, I had become fascinated by a simpler and potentially easier-to-implement approach: using naturally occurring microbial opsins, which would pump ions into or out of neurons in response to light. Opsins had been studied since the 1970s because of their fascinating biophysical properties, and for the evolutionary insights they offer into how life forms use light as an energy source or sensory cue.1 These membrane-spanning microbial molecules—proteins with seven helical domains—react to light by transporting ions across the lipid membranes of cells in which they are genetically expressed. (See the illustration above.) For this strategy to work, an opsin would have to be expressed in the neuron’s lipid membrane and, once in place, efficiently perform this ion-transport function. One reason for optimism was that bacteriorhodopsin had successfully been expressed in eukaryotic cell membranes—including those of yeast cells and frog oocytes—and had pumped ions in response to light in these heterologous expression systems. And in 1999, researchers had shown that, although many halorhodopsins might work best in the high salinity environments in which their host archaea naturally live (i.e., in very high chloride concentrations), a halorhodopsin from Natronomonas pharaonis (Halo/NpHR) functioned best at chloride levels comparable to those in the mammalian brain.2

Infographic: OPSINS: Tools of the Trade (illustration: Lucy Reading-Ikkanda)

I was intrigued by this, and in May 2000 I e-mailed the opsin pioneer Janos Lanyi, asking for a clone of the N. pharaonis halorhodopsin, for the purpose of actively controlling neurons with light. Janos kindly asked his collaborator Richard Needleman to send it to me. But the reality of graduate school was setting in: unfortunately, I had already left Stanford for the summer to take a neuroscience class at the Marine Biology Laboratory in Woods Hole. I asked Richard to send the clone to Karl. When I returned to Stanford in the fall, I was so busy learning all the skills I would need for my thesis work on motor control that the opsin project took a backseat for a while.
The channelrhodopsin collaboration

In 2002 a pioneering paper from the lab of Gero Miesenböck showed that genetic expression of a three-gene Drosophila phototransduction cascade in neurons allowed the neurons to be excited by light, and suggested that the ability to activate specific neurons with light could serve as a tool for analyzing neural circuits.3 But the light-driven currents mediated by this system were slow, and this technical issue may have been a factor that limited adoption of the tool.

This paper was fresh in my mind when, in fall 2003, Karl e-mailed me to express interest in revisiting the magnetic-bead stimulation idea as a potential project that we could pursue together later—when he had his own lab, and I had finished my PhD and could join his lab as a postdoc. Karl was then a postdoctoral researcher in Robert Malenka’s lab (also at Stanford), and I was about halfway through my PhD. We explored the magnetic-bead idea between October 2003 and February 2004. Around that time I read a just-published paper by Georg Nagel, Ernst Bamberg, Peter Hegemann, and colleagues, announcing the discovery of channelrhodopsin-2 (ChR2), a light-gated cation channel, and noting that the protein could be used as a tool to depolarize cultured mammalian cells in response to light.4

In February 2004, I proposed to Karl that we contact Georg to see if they had constructs they were willing to distribute. Karl got in touch with Georg in March, obtained the construct, and inserted the gene into a neural expression vector. Georg had made several further advances by then: he had created fusion proteins of ChR2 and yellow fluorescent protein, in order to monitor ChR2 expression, and had also found a ChR2 mutant with improved kinetics. Furthermore, Georg commented that in cell culture, ChR2 appeared to require little or no chemical supplementation in order to operate (in microbial opsins, the chemical chromophore all-trans-retinal must be attached to the protein to serve as the light absorber; it appeared to exist at sufficient levels in cell culture).

Finally, we were getting the ball rolling on targetable control of specific neural types. Karl optimized the gene expression conditions, and found that neurons could indeed tolerate ChR2 expression. Throughout July, working in off-hours, I debugged the optics of the Tsien-lab rig that I had often used in the past. Late at night, around 1 a.m. on August 4, 2004, I went into the lab, put a dish of cultured neurons expressing ChR2 into the microscope, patch-clamped a glowing neuron, and triggered the program that I had written to pulse blue light at the neurons. To my amazement, the very first neuron I patched fired precise action potentials in response to blue light. That night I collected data that demonstrated all the core principles we would publish a year later in Nature Neuroscience, announcing that ChR2 could be used to depolarize neurons.5 During that long, exciting first night of experimentation in 2004, I determined that ChR2 was safely expressed and physiologically functional in neurons. The neurons tolerated expression levels of the protein that were high enough to mediate strong neural depolarizations. Even with brief pulses of blue light, lasting just a few milliseconds, the magnitude of expressed-ChR2 photocurrents was large enough to mediate single action potentials in neurons, thus enabling temporally precise driving of spike trains. Serendipity had struck—the molecule was good enough in its wild-type form to be used in neurons right away. I e-mailed Karl, “Tired, but excited.” He shot back, “This is great!!!!!”

Transitions and optical neural silencers

In January 2005, Karl finished his postdoc and became an assistant professor of bioengineering and psychiatry at Stanford. Feng Zhang, then a first-year graduate student in chemistry (and now an assistant professor at MIT and at the Broad Institute), joined Karl’s new lab, where he cloned ChR2 into a lentiviral vector, and produced lentivirus that greatly increased the reliability of ChR2 expression in neurons. I was still working on my PhD, and continued to perform ChR2 experiments in the Tsien lab. Indeed, about half the ChR2 experiments in our first optogenetics paper were done in Richard Tsien’s lab, and I owe him a debt of gratitude for providing an environment in which new ideas could be pursued. I regret that, in our first optogenetics paper, we did not acknowledge that many of the key experiments had been done there. When I started working in Karl’s lab in late March 2005, we carried out experiments to flesh out all the figures for our paper, which appeared in Nature Neuroscience in August 2005, a year after that exhilarating first discovery that the technique worked.

CHANNELRHODOPSINS IN ACTION
A neuron expresses the light-gated cation channel channelrhodopsin-2 (green dots on the cell body) in its cell membrane (1). The neuron is illuminated by a brief pulse of blue light a few milliseconds long, which opens the channelrhodopsin-2 molecules (2), allowing positively charged ions to enter the cell and causing the neuron to fire an electrical pulse (3). A neural network contains different kinds of cells (pyramidal cells, basket cells, etc.), with the basket cells (small star-shaped cells) selectively sensitized to light activation. When blue light hits the neural network, the basket cells fire electrical pulses (white highlights), while the surrounding neurons are not directly affected by the light (4). The basket cells, once activated, can, however, modulate the activity of the rest of the network.
Video: MIT McGovern Institute, Julie Pryor, Charles Jennings, Sputnik Animation, Ed Boyden

Around that same time, Guoping Feng, then leading a lab at Duke University (and now a professor at MIT), began to make the first transgenic mice expressing ChR2 in neurons.6 Several other groups, including the Yawo, Herlitze, Landmesser, Nagel, Gottschalk, and Pan labs, rapidly published papers demonstrating the use of ChR2 in neurons in the months following.7,8,9,10 Clearly, the idea had been in the air, with many groups chasing the use of channelrhodopsin in neurons. These papers showed, among many other groundbreaking results, that no chemicals were needed to supplement ChR2 function in the living mammalian brain.

Almost immediately after I finished my PhD in October 2005, two months after our ChR2 paper came out, I began the faculty job search process. At the same time, I started a position as a postdoctoral researcher with Karl and with Mark Schnitzer at Stanford. The job-search process ended up consuming much of my time, and being on the road, I began doing bioengineering invention consulting in order to learn about other new technology areas that could be brought to bear on neuroscience. I accepted a faculty job offer from the MIT Media Lab in September 2006, and began the process of setting up a neuroengineering research group there.

Around that time, I began a collaboration with Xue Han, my then girlfriend (and a postdoctoral researcher in the lab of Richard Tsien), to revisit the original idea of using the N. pharaonis halorhodopsin to mediate optical neural silencing. Back in 2000, Karl and I had planned to pursue this jointly; there was now the potential for competition, since we were working separately. Xue and I ordered the gene to be synthesized in codon-optimized form by a DNA synthesis company, and, using the same Tsien-lab rig that had supported the channelrhodopsin paper, Xue acquired data showing that this halorhodopsin could indeed silence neural activity. Our paper11 appeared in the March 2007 issue of PLoS ONE; Karl’s group, working in parallel, published a paper in Nature a few weeks later, independently showing that this halorhodopsin could support light-driven silencing of neurons, and also including an impressive demonstration that it could be used to manipulate behavior in Caenorhabditis elegans.12 Later, both our groups teamed up to file a joint patent on the use of this halorhodopsin to silence neural activity. As a testament to the unanticipated side effects of following innovation where it leads you, Xue and I got married in 2009 (and she is now an assistant professor at Boston University).

I continued to survey a wide variety of microorganisms for better silencing opsins: the inexpensiveness of gene synthesis meant that it was possible to rapidly obtain genes codon-optimized for mammalian expression, and to screen them for new and interesting light-drivable neural functions. Brian Chow (now an assistant professor at the University of Pennsylvania) joined my lab at MIT as a postdoctoral researcher, and began collaborating with Xue. In 2008 they identified a new class of neural silencer, the archaerhodopsins, which were not only capable of high-amplitude neural silencing—the first such opsin that could support 100 percent shutdown of neurons in the awake, behaving animal—but also were capable of rapid recovery after having been illuminated for extended durations, unlike halorhodopsins, which took minutes to recover after long-duration illumination.13 Interestingly, the archaerhodopsins are light-driven outward pumps, similar to bacteriorhodopsin—they hyperpolarize neurons by pumping protons out of the cells. However, the resultant pH changes are as small as those produced by channelrhodopsins (which have proton conductances a million times greater than their sodium conductances), and well within the safe range of neuronal operation. Intriguingly, we discovered that the H. salinarum bacteriorhodopsin, the very first opsin characterized in the early 1970s, was able to mediate decent optical neural silencing, suggesting that perhaps opsins could have been applied to neuroscience decades ago.
Beyond luck: systematic discovery and engineering of optogenetic tools

An essential aspect of furthering this work is the free and open distribution of these optogenetic tools, even prior to publication. To facilitate teaching people how to use these tools, our lab regularly posts white papers on our website* with details on reagents and optical hardware (a complete optogenetics setup costs as little as a few thousand dollars for all required hardware and consumables), and we have also partnered with nonprofit organizations such as Addgene and the University of North Carolina Gene Therapy Center Vector Core to distribute DNA and viruses, respectively. We regularly host visitors to observe experiments being done in our lab, seeking to encourage the community building that has been central to the development of optogenetics from the beginning.

As a case study, the birth of optogenetics offers a number of interesting insights into the blend of factors that can lead to the creation of a neurotechnological innovation. The original optogenetic tools were identified partly through serendipity, guided by a multidisciplinary convergence and a neuroscience-driven knowledge of what might make a good tool. Clearly, the original serendipity that fostered the formation of this concept, and that accompanied the initial quick try to see if it would work in nerve cells, has now given way to the systematized luck of bioengineering, with its machines and algorithms designed to optimize the chances of finding something new. Many labs, driven by genomic mining and mutagenesis, are reporting the discovery of new opsins with improved light and color sensitivities and new ionic properties. It is to be hoped, of course, that as this systematized luck accelerates, we will stumble upon more innovations that can aid in dissecting the enormous complexity of the brain—beginning the cycle of invention again.
Putting the toolbox to work

These optogenetic tools are now in use by many hundreds of neuroscience and biology labs around the world. Opsins have been used to study how neurons contribute to information processing and behavior in organisms including C. elegans, Drosophila, zebrafish, mouse, rat, and nonhuman primate. Light sources such as conventional mercury and xenon lamps, light-emitting diodes, scanning lasers, femtosecond lasers, and other common microscopy equipment suffice for in vitro use.

In vivo mammalian use of these optogenetic reagents has been greatly facilitated by the availability of inexpensive lasers with optical-fiber outputs; the free end of the optical fiber is simply inserted into the brain of the live animal when needed,14 or coupled at the time of experimentation to an implanted optical fiber.

For mammalian systems, viruses bearing genes encoding for opsins have proven popular in experimental use, due to their ease of creation and use. These viruses achieve their specificity either by infecting only specific neurons, or by containing regulatory promoters that constrain opsin expression to certain kinds of neurons.

An increasing number of transgenic mouse lines are also now being created, in which an opsin is expressed in a given neuron type through transgenic methodologies. One popular hybrid strategy is to inject a virus that contains a Cre-activated genetic cassette encoding the opsin into one of the burgeoning number of mouse lines that express Cre recombinase in specific neuron types, so that the opsin will only be produced in Cre recombinase-expressing neurons.15

In 2009, in collaboration with the labs of Robert Desimone and Ann Graybiel at MIT, we published the first use of channelrhodopsin-2 in the nonhuman primate brain, showing that it could safely and effectively mediate neuron type–specific activation in the rhesus macaque without provoking neuron death or functional immune reactions.16 This paper opened up the possibility of translating the technique of optical neural stimulation into the clinic as a treatment modality, although clearly much more work is required to understand this potential application of optogenetics.

Edward Boyden leads the Synthetic Neurobiology Group at MIT, where he is the Benesse Career Development Professor and associate professor of biological engineering and brain and cognitive science at the MIT Media Lab and the MIT McGovern Institute.
References

1. D. Oesterhelt, W. Stoeckenius, “Rhodopsin-like protein from the purple membrane of Halobacterium halobium,” Nat New Biol, 233:149-52, 1971. ↩
2. D. Okuno et al., “Chloride concentration dependency of the electrogenic activity of halorhodopsin,” Biochemistry, 38:5422-29, 1999. ↩
3. B.V. Zemelman et al., “Selective photostimulation of genetically chARGed neurons,” Neuron, 33:15-22, 2002. ↩
4. G. Nagel et al., “Channelrhodopsin-2, a directly light-gated cation-selective membrane channel,” PNAS, 100:13940-45, 2003. ↩
5. E.S. Boyden et al., “Millisecond-timescale, genetically targeted optical control of neural activity,” Nat Neurosci, 8:1263-68, 2005. ↩
6. B.R. Arenkiel et al., “In vivo light-induced activation of neural circuitry in transgenic mice expressing channelrhodopsin-2,” Neuron, 54:205-18, 2007. ↩
7. T. Ishizuka et al., “Kinetic evaluation of photosensitivity in genetically engineered neurons expressing green algae light-gated channels,” Neurosci Res, 54:85-94, 2006. ↩
8. X. Li et al., “Fast noninvasive activation and inhibition of neural and network activity by vertebrate rhodopsin and green algae channelrhodopsin,” PNAS, 102:17816-21, 2005. ↩
9. G. Nagel et al., “Light activation of channelrhodopsin-2 in excitable cells of Caenorhabditis elegans triggers rapid behavioral responses,” Curr Biol, 15:2279-84, 2005. ↩
10. A. Bi et al., “Ectopic expression of a microbial-type rhodopsin restores visual responses in mice with photoreceptor degeneration,” Neuron, 50:23-33, 2006. ↩
11. X. Han, E.S. Boyden, “Multiple-color optical activation, silencing, and desynchronization of neural activity, with single-spike temporal resolution,” PLoS ONE, 2:e299, 2007. ↩
12. F. Zhang et al., “Multimodal fast optical interrogation of neural circuitry,” Nature, 446:633-39, 2007. ↩
13. B.Y. Chow et al., “High-performance genetically targetable optical neural silencing by light-driven proton pumps,” Nature, 463:98-102, 2010. ↩
14. A.M. Aravanis et al., “An optical neural interface: in vivo control of rodent motor cortex with integrated fiberoptic and optogenetic technology,” J Neural Eng, 4:S143-56, 2007. ↩
15. D. Atasoy et al., “A FLEX switch targets channelrhodopsin-2 to multiple cell types for imaging and long-range circuit mapping,” J Neurosci, 28:7025-30, 2008. ↩
16. X. Han et al., “Millisecond-timescale optical control of neural dynamics in the nonhuman primate brain,” Neuron, 62:191-98, 2009. ↩

Wednesday, July 6, 2011

Bionic glasses for poor vision

http://www.ox.ac.uk/media/science_blog/110705.html

Jonathan Wood | 05 Jul 11

A set of glasses packed with technology normally seen in smartphones and games consoles is the main draw at one of the featured stands at this year’s Royal Society Summer Science Exhibition.

But the exhibit isn’t about the latest gadget must-have, it’s all about aiding those with poor vision and giving them greater independence.

‘We want to be able to enhance vision in those who’ve lost it or who have little left or almost none,’ explains Dr Stephen Hicks of the Department of Clinical Neurology at Oxford University. ‘The glasses should allow people to be more independent – finding their own directions and signposts, and spotting warning signals,’ he says.

Technology developed for mobile phones and computer gaming – such as video cameras, position detectors, face recognition and tracking software, and depth sensors – is now readily and cheaply available. So Oxford researchers have been looking at ways that this technology can be combined into a normal-looking pair of glasses to help those who might have just a small area of vision left, have cloudy or blurry vision, or can’t process detailed images.

The glasses should be appropriate for common types of visual impairment such as age-related macular degeneration and diabetic retinopathy. NHS Choices estimates that around 30% of people over 75 have early signs of age-related macular degeneration, and about 7% have more advanced forms.

‘The types of poor vision we are talking about are where you might be able to see your own hand moving in front of you, but you can’t define the fingers,’ explains Stephen.

The glasses have video cameras mounted at the corners to capture what the wearer is looking at, while a display of tiny lights embedded in the see-through lenses of the glasses feeds back extra information about objects, people or obstacles in view.

In between, a smartphone-type computer running in your pocket recognises objects in the video image or tracks where a person is, driving the lights in the display in real time.

The extra information the glasses display about their surroundings should allow people to navigate round a room, pick out the most relevant things and locate objects placed nearby.

‘The glasses must look discreet, allow eye contact between people and present a simplified image to people with poor vision, to help them maintain independence in life,’ says Stephen. These guiding principles are important for coming up with an aid that is acceptable for people to wear in public, with eye contact being so important in social relationships, he explains.

The see-through display means other people can see you, while different light colours might allow different types of information to be fed back to the wearer, Stephen says. You could have different colours for people, or important objects, and brightness could tell you how near things were.
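A toy sketch can make this colour-and-brightness idea concrete. Note that the categories, colours and distance scale below are invented purely for illustration; they are not taken from the Oxford prototype:

```javascript
// Sketch: map a detected object's category and distance to an LED
// colour and brightness for a low-resolution see-through display.
// All names, colours and thresholds here are assumptions.
const CATEGORY_COLOURS = {
  person: "red",
  obstacle: "yellow",
  object: "green",
};

// Brightness grows as things get closer: full brightness at contact,
// fading to zero at maxRange metres (and beyond).
function ledSignal(category, distanceMetres, maxRange = 5) {
  const colour = CATEGORY_COLOURS[category] ?? "white";
  const clamped = Math.min(Math.max(distanceMetres, 0), maxRange);
  const brightness = 1 - clamped / maxRange;
  return { colour, brightness };
}
```

In this scheme a person standing right in front of the wearer would light up bright red, while a distant obstacle would appear as a dim yellow glow.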

Stephen even suggests it may be possible for the technology to read back newspaper headlines. Optical character recognition is coming on, he says, so it is possible to foresee a computer picking out headlines in the video image and having them read back to the wearer through earphones built into the glasses. A whole stream of such ideas and uses is possible, he suggests. Some mobile phones already have barcode readers that download the prices of products; such barcode and price-tag readers could also be useful additions to the glasses.

Stephen believes these hi-tech glasses can be realised for a similar cost to a smartphone – around £500. For comparison, a guide dog costs around £25,000-£30,000 to train, he estimates.

He adds that people will have to get used to the extra information relayed on the glasses’ display, but that it might be similar to physiotherapy – the glasses will need to be tailored to individuals, their vision and their needs, and it will take a bit of time and practice to start seeing the benefits.

The exhibit at the Royal Society will take visitors through how the technology will work. ‘The primary aim is to simulate the experience of a visual prosthetic to give people an idea of what can be seen and how it might look,’ Stephen says.

A giant screen with video images of the exhibition floor itself will show people-tracking and depth perception at work. Another screen will invite visitors to see how good they are at navigating with this information. A small display added to the lenses of ski goggles should give people sufficient information to find their way round a set of tasks. An early prototype of a transparent LED array for the eventual glasses will also be on display.

All of this is very much at an early stage. The group is still assembling prototypes of their glasses. But as well as being one of the featured stands at the Royal Society’s exhibition, they have funding from the National Institute of Health Research to do a year-long feasibility study and plan to try out early systems with a few people in their own homes later this year.

The Royal Society’s Summer Science Exhibition begins today and runs all week until Sunday 10 July. It includes 20 exhibits showing some of the latest UK science that is changing our world and gives the chance to talk to and question the researchers involved.

Image courtesy of Dr Stephen Hicks.

Monday, July 4, 2011

Japan finds rare earths in Pacific seabed

http://www.bbc.co.uk/news/world-asia-pacific-14009910

Japanese researchers say they have discovered vast deposits of rare earth minerals, used in many hi-tech appliances, in the seabed.

The geologists estimate that there are about 100bn tonnes of the rare elements in the mud of the Pacific Ocean floor.

At present, China produces 97% of the world's rare earth metals.

Analysts say the Pacific discovery could challenge China's dominance, if recovering the minerals from the seabed proves commercially viable.

The British journal Nature Geoscience reported that a team of scientists led by Yasuhiro Kato, an associate professor of earth science at the University of Tokyo, found the minerals in sea mud at 78 locations.

"The deposits have a heavy concentration of rare earths. Just one square kilometre (0.4 square mile) of deposits will be able to provide one-fifth of the current global annual consumption," said Kato.

The minerals were found at depths of 3,500 to 6,000 metres (11,500-20,000 ft) below the ocean surface.
Environmental fears

One-third of the sites yielded rich contents of rare earths and the metal yttrium, Mr Kato said.

The deposits are in international waters east and west of Hawaii, and east of Tahiti in French Polynesia.


Mr Kato estimated that rare earths contained in the deposits amounted to 80 to 100 billion tonnes.

The US Geological Survey has estimated that global reserves are just 110 million tonnes, found mainly in China, Russia and other former Soviet countries, and the United States.
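A back-of-the-envelope comparison of the two figures quoted here puts the scale in perspective (the numbers are the article's; the arithmetic is illustrative only):

```javascript
// Compare the Pacific seabed estimate (80-100 billion tonnes) with the
// USGS estimate of global land reserves (110 million tonnes).
const seabedLowTonnes = 80e9;
const seabedHighTonnes = 100e9;
const landReservesTonnes = 110e6;

// The seabed deposits would be roughly 700-900 times the known reserves.
const lowRatio = seabedLowTonnes / landReservesTonnes;   // ~727
const highRatio = seabedHighTonnes / landReservesTonnes; // ~909
```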

China's apparent monopoly of rare earth production enabled it to restrain supply last year during a territorial dispute with Japan.

Japan has since sought new sources of the rare earth minerals.

The Malaysian government is considering whether to allow the construction of an Australian-financed project to mine rare earths, in the face of local opposition focused on the fear of radioactive waste.

The number of firms seeking licences to dig through the Pacific Ocean floor is growing rapidly.

The listed mining company Nautilus has the first licence to mine the floor of the Bismarck and Solomon seas around Papua New Guinea.

It will be recovering what is called seafloor massive sulphide, for its copper and gold content.

The prospect of deep sea mining for precious metals - and the damage that could do to marine ecosystems - is worrying environmentalists.

Chromeless: Build your own Browser UI using HTML, CSS and JS

http://mozillalabs.com/chromeless

http://mozillalabs.com/chromeless/2010/10/21/chromeless-build-your-own-browser-ui-using-html-css-js/

The “Chromeless” project experiments with the idea of removing the current browser user interface and replacing it with a flexible platform which allows for the creation of new browser UI using standard Web technologies such as HTML, CSS and JavaScript.
Introduction

Have you ever had an idea to improve the user interface of your browser? Have you ever actually gone and tried to make that idea a reality? If you have, you probably used technologies like XUL and XPCOM. Much of the user interface (browser chrome) of Firefox is implemented in XUL, which uses a lot of Web-based technologies such as the DOM and JavaScript. Firefox is put together in a way that seasoned developers are able to implement features with amazing efficiency, but at the same time, the browser interface in XUL represents a barrier for potential contributors. What if the parts of the browser that are most interesting to contributors were implemented in standard Web technologies such as HTML, CSS and JavaScript? What kinds of wild-eyed experimentation would we see if a new conception of browser UI could be prototyped in about the same time it takes to write a web page?

It’s questions like these that have motivated us to start a new Mozilla Labs experiment, codenamed “chromeless”. We intend to create an experimental toolkit which will allow developers to build their own Web browser using standard Web technologies: HTML, CSS, and JavaScript. The following screenshot is an example of a very simple browser application with page thumbnails used for tab handlers:

Overview

This is a functional application written in HTML running on a pre-alpha version of the chromeless platform: the inner browser elements are iframes instead of XUL browser elements. It serves to illustrate the general idea of the project and does not yet provide proper sandboxing (among other things) — and plenty of details on the state of implementation are available in the form of annotations in the source code.

The current implementation is a remix of Atul Varma’s Cuddlefish Lab and the Jetpack SDK, combined with XULRunner. Leveraging these existing Mozilla technologies made it possible for us to quickly get to a point where we could launch a XULRunner based application that is a blank canvas — chromeless.

Instead of loading XUL, the application’s main execution point is an HTML file. This page is granted extra privileges (i.e., it can access CommonJS modules made available by the Jetpack platform). Our goal is to expose the basic functionality required to write a browser to this HTML entry point, via CommonJS modules and as lightweight conventions on top of the DOM. For example, the HTML author might interact with an “application” interface module in order to set the labels and handlers for OS-specific window menus, or to invoke an OS-specific notification mechanism. The title of the HTML document might be the name of the running process. The height and width of the document may be linked to the size of the main application’s window. The following diagram shows a high-level visualization of this “chromeless toolkit”.
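To make these conventions concrete, here is a sketch of what such an HTML entry point might look like. Everything in it is an assumption for illustration: the “application” module name, its methods, and the markup are invented, not the actual Chromeless API.

```html
<!-- Hypothetical Chromeless entry point: an ordinary HTML page granted
     extra privileges. The "application" module and its methods are
     invented for this sketch. -->
<html>
  <head>
    <!-- By the conventions described above, the document title could
         become the name of the running process. -->
    <title>My Experimental Browser</title>
    <script>
      // A privileged page might load CommonJS modules (names assumed).
      const app = require("application");
      app.setMenu([
        { label: "File", items: [{ label: "Quit", onClick: () => app.quit() }] },
      ]);

      // A plain iframe stands in for the XUL browser element.
      function navigate(url) {
        document.getElementById("content").src = url;
      }
    </script>
  </head>
  <body>
    <input id="urlbar" placeholder="Enter a URL"
           onchange="navigate(this.value)" />
    <iframe id="content" src="about:blank"></iframe>
  </body>
</html>
```

The appeal of this arrangement is that the entire browser shell is just a web page: restyling the URL bar or replacing tabs with thumbnails becomes ordinary HTML and CSS work.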

Where are We Now?

Currently we have a functional pre-alpha prototype that is capable of loading an HTML page and rendering browser UI. In the coming months we will add specific APIs to allow for more meaningful browser construction. We’ll investigate how we can integrate security features to keep Web content in a minimally privileged sandbox. Finally, we aim to wrap this exploration up into an accessible SDK to make it easy to get started with remixing the browser.
Get Involved

If you want to experiment with this project, please note that its current state is experimental and that things are often changing. You can get the source code and instructions at http://github.com/mozilla/chromeless. Your input is much appreciated – please leave your feedback here, join the Mozilla Labs Group or get in touch with us at #labs on irc.mozilla.org. And stay tuned – we will share more as the project unfolds.

http://ask.slashdot.org/story/11/07/23/2019249/Ask-Slashdot-Chromeless-Cross-Platform-Browser

Google’s Six-Front War

http://techcrunch.com/2011/07/03/google-six-front-war/

Semil Shah
20 hours ago

Editor’s note: Guest contributor Semil Shah is an entrepreneur interested in digital media, consumer internet, and social networks. He is based in Palo Alto and you can follow him on twitter @semilshah.

While the tech world is buzzing about the launch and implications of Google’s new social network, Google+, it’s worth noting that Google isn’t just in a war with Facebook, it’s at war with multiple companies across multiple industries. In fact, Google is fighting a multi-front war with a host of tech giants for control over some of the most valuable pieces of real estate in technology. Whether it’s social, mobile, browsing, local, enterprise, or even search, Google is being attacked from all angles. And make no mistake about it, they are fighting back, and fighting back hard. Entrepreneur-turned-venture capitalist Ben Horowitz laid the groundwork for this in his post Peacetime CEO / Wartime CEO, saying Larry Page “seems to have determined that Google is moving into war and he clearly intends to be a wartime CEO. This will be a profound change for Google and the entire high-tech industry.” Horowitz is exactly right.

Before I investigate each battle front in the war, it’s important to highlight the fact that perhaps no other tech company right now could withstand such a multifaceted attack, let alone be able to retaliate efficiently. Sure, Apple might get pushed around by Facebook, so it integrated Twitter into iOS5, and sure, Amazon and Apple have their own tussles over digital media and payments, but at the end of the day, Google is in this unique and potentially highly vulnerable position that will test the company’s mettle and ability to not only reinvent itself, but also to perhaps strengthen its core. Let’s take a quick look into the GooglePlex, which may now resemble more of a military complex, plotting out strategies and tactics for this war. Google must battle on at least six fronts simultaneously.

The Browser Front: Users have a choice between Internet Explorer (Microsoft), Firefox (Mozilla), Safari (Apple), and Google’s offering, Chrome. The speculation is that Facebook is interested in a browser, too, since Mozilla co-founder Blake Ross is an employee, but that hasn’t happened yet. More recently, the social browser RockMelt has captured some people’s interest, and last week secured $30M in financing, adding Facebook board members Jim Breyer and Marc Andreessen to its board. Andreessen obviously knows a thing or two about browsers. Though most browsers enable users to power their search by Google as an option, Google’s Chrome offering isn’t the leading browser by market share, and not even in second place.

The Mobile Front: Apple’s iOS took the mobile world by storm in 2007 with the first iPhone. Then Google’s Android operating system roared alongside it, turning into a freight train of downloads, as Bill Gurley said, only recently to be slowed by Apple’s release of a phone with Verizon. While Android may have more installs, it doesn’t have the developer community to build killer apps, because the Android marketplace (both for hardware and firmware) is highly fragmented, whereas iOS is about symphonic convergence. All along, there’s been ample speculation about whether Facebook was building its own mobile phone device, or as the company has publicly hinted, how it would integrate social layers into different mobile operating systems and platforms.

The Search Front: Whether we’re on the desktop/laptop, a tablet, or a phone, Google wants to be powering our search, and this is where they dominate, though Microsoft’s Bing has been able to acquire an impressive number of clicks. While everything is fine today, there are some troubling warning signs. On desktops and laptops, people will continue to use a variety of browsers, though they end up spending a lot of time on Facebook, which scares Google because of the trend of people moving slowly from search to discovery. This, however, won’t shift overnight. For mobile devices, it’s trickier. Most iOS users navigate the web through Apple’s own browser, Safari, which can use Google for search. On Android-powered tablets and phones, Google controls more of the user experience, including search, navigation, and application integration. While this is going on, users are trying their hand at realtime search on Twitter or BackType, looking for content directly within Quora, or using Blekko’s hashtags to better cut through and sort the web.

The Local Front: When users search for things on Google and click through, Google gets a little cut of that click. It knows how to drive traffic online and be paid handsomely for it. Driving and directing traffic that originates online into the real world, however, is a different story. As Steve Cheney elegantly stated, when we search online for places to go and then end up there in real life, the place itself has no clear sense of what drove us there. This is why the Daily Deals space is so red-hot and competitive: it helps to close this major, valuable loop. If you search for a restaurant via OpenTable and make a reservation, the merchant knows exactly what drove you to the door. That’s why Yelp, which used to provide only reviews, offered the ability to check in for credit after Foursquare built up a head of steam. The opportunity here is so large yet fragmented that it drove Google to offer $6B for Groupon just six months ago. In local, Google is competing against Groupon, but also Amazon (which has a stake in LivingSocial), a host of smaller players (such as Loopt), and the forthcoming deals companies that will continue to roll out. This is just the beginning.

The Social Front: Yes, again, Google is fighting a war with Facebook. That much is obvious. What’s less obvious is how other social networks have been able to capture bits and pieces of our identities, leaving Google without any information about who we are. Users have been pumping personal content into blogs like Tumblr and networks like LinkedIn, and even asking search-related questions on Quora. Although we may all predominantly search via Google, the company is struggling in the social field. That is why Larry Page stepped in as CEO, why he tied bonuses to social, and why Google+ is the company’s social sword and shield to fight back and capture user data, despite it being late in the game. Strategically speaking, even if Google+ doesn’t hold or catch fire, it will probably cause its rivals to pause for a moment and consider a range of short- and long-term implications.

The Enterprise Front: If you think the browser, mobile, social, local, and search fronts aren’t enough, check out Google’s combatants in the enterprise: names like Microsoft, Oracle, IBM, and VMware, among others. Google’s App Engine could go up against AWS, though that doesn’t seem likely. Google competes with IBM and Oracle on enterprise search (such as OmniFind) and on email and work collaboration tools (Lotus). Google’s Chromebooks are seen as a potential entry point into enterprise computing, going up against hardware giants like HP, Dell, and Lenovo. Furthermore, Google may be trying to push Android into the enterprise, which would apply even more pressure on Research in Motion. There’s VMware, which offers Zimbra, PaaS, and presentation tools, to name a few. And, of course, there’s Microsoft, which competes with Google across a wide range of productivity applications. For all of Google’s consumer-facing brands and applications, its strength in the enterprise is sometimes underestimated, despite the many excellent positions it currently holds.

It’s easy to pile on Google given its size, its wallet, and its global influence and impact. The company is the Goliath, and has been for many years, and is now facing many challenging tests, all at the same time. And while it’s a fun parlor game to sit around and pontificate about how Google’s reign might be over or how slowly Gmail loads, the reality is that no other company could legitimately compete on so many different battlefronts against so many different competitors. There’s no way Google can win every battle, and it must know that, but it will win some, and it will be fascinating to see how the company both adapts and stays the course along the way. Google is not going to go down without a fight, and it could take another decade for all of these battles to play out. The company has some of the world’s brightest engineers, a stockpile of cash, and incredible consumer Internet mind share worldwide. Sit tight.