Tuesday, May 31, 2011

Pentagon Sets Stage for U.S. to Respond to Computer Sabotage With Military Force



WASHINGTON—The Pentagon has concluded that computer sabotage coming from another country can constitute an act of war, a finding that for the first time opens the door for the U.S. to respond using traditional military force.

The Pentagon's first formal cyber strategy, unclassified portions of which are expected to become public next month, represents an early attempt to grapple with a changing world in which a hacker could pose as significant a threat to U.S. nuclear reactors, subways or pipelines as a hostile country's military.

In part, the Pentagon intends its plan as a warning to potential adversaries of the consequences of attacking the U.S. in this way. "If you shut down our power grid, maybe we will put a missile down one of your smokestacks," said a military official.

Recent attacks on the Pentagon's own systems—as well as the sabotaging of Iran's nuclear program via the Stuxnet computer worm—have given new urgency to U.S. efforts to develop a more formalized approach to cyber attacks. A key moment occurred in 2008, when at least one U.S. military computer system was penetrated. This weekend Lockheed Martin, a major military contractor, acknowledged that it had been the victim of an infiltration, while playing down its impact.

The report will also spark a debate over a range of sensitive issues the Pentagon left unaddressed, including whether the U.S. can ever be certain about an attack's origin, and how to define when computer sabotage is serious enough to constitute an act of war. These questions have already been a topic of dispute within the military.

One idea gaining momentum at the Pentagon is the notion of "equivalence." If a cyber attack produces the death, damage, destruction or high-level disruption that a traditional military attack would cause, then it would be a candidate for a "use of force" consideration, which could merit retaliation.
The War on Cyber Attacks

Attacks of varying severity have rattled nations in recent years.

June 2009: First version of Stuxnet virus starts spreading, eventually sabotaging Iran's nuclear program. Some experts suspect it was an Israeli attempt, possibly with American help.

November 2008: A computer virus believed to have originated in Russia succeeds in penetrating at least one classified U.S. military computer network.

August 2008: Online attack on websites of Georgian government agencies and financial institutions at start of brief war between Russia and Georgia.

May 2007: Estonian banking and government websites are attacked in an incident similar to the later one in Georgia, but with greater impact because Estonia is more dependent on online banking.

The Pentagon's document runs about 30 pages in its classified version and 12 pages in the unclassified one. It concludes that the Laws of Armed Conflict—derived from various treaties and customs that, over the years, have come to guide the conduct of war and proportionality of response—apply in cyberspace as in traditional warfare, according to three defense officials who have read the document. The document goes on to describe the Defense Department's dependence on information technology and why it must forge partnerships with other nations and private industry to protect infrastructure.

The strategy will also state the importance of synchronizing U.S. cyber-war doctrine with that of its allies, and will set out principles for new security policies. The North Atlantic Treaty Organization took an initial step last year when it decided that, in the event of a cyber attack on an ally, it would convene a group to "consult together" on the attacks, but allies wouldn't be required to help each other respond. The group hasn't yet met to confer on a cyber incident.

Pentagon officials believe the most-sophisticated computer attacks require the resources of a government. For instance, the weapons used in a major technological assault, such as taking down a power grid, would likely have been developed with state support, Pentagon officials say.

The move to formalize the Pentagon's thinking was born of the military's realization that the U.S. has been slow to build up defenses against these kinds of attacks, even as civilian and military infrastructure has grown more dependent on the Internet. The military established a new command last year, headed by the director of the National Security Agency, to consolidate military network security and attack efforts.

The Pentagon itself was rattled by the 2008 attack, a breach significant enough that the Chairman of the Joint Chiefs briefed then-President George W. Bush. At the time, Pentagon officials said they believed the attack originated in Russia, although they didn't say whether they believed the attacks were connected to the government. Russia has denied involvement.

The Rules of Armed Conflict that guide traditional wars are derived from a series of international treaties, such as the Geneva Conventions, as well as practices that the U.S. and other nations consider customary international law. But cyber warfare isn't covered by existing treaties. So military officials say they want to seek a consensus among allies about how to proceed.

"Act of war" is a political phrase, not a legal term, said Charles Dunlap, a retired Air Force Major General and professor at Duke University law school. Gen. Dunlap argues cyber attacks that have a violent effect are the legal equivalent of armed attacks, or what the military calls a "use of force."

"A cyber attack is governed by basically the same rules as any other kind of attack if the effects of it are essentially the same," Gen. Dunlap said Monday. The U.S. would need to show that the cyber weapon used had an effect that was the equivalent of a conventional attack.

James Lewis, a computer-security specialist at the Center for Strategic and International Studies who has advised the Obama administration, said Pentagon officials are currently figuring out what kind of cyber attack would constitute a use of force. Many military planners believe the trigger for retaliation should be the amount of damage—actual or attempted—caused by the attack.

For instance, if computer sabotage shut down as much commerce as would a naval blockade, it could be considered an act of war that justifies retaliation, Mr. Lewis said. Gauges would include "death, damage, destruction or a high level of disruption," he said.

Culpability, military planners argue in internal Pentagon debates, depends on the degree to which the attack, or the weapons themselves, can be linked to a foreign government. That's a tricky prospect at the best of times.

The brief 2008 war between Russia and Georgia included a cyber attack that disrupted the websites of Georgian government agencies and financial institutions. The damage wasn't permanent but did disrupt communication early in the war.

A subsequent NATO study said it was too hard to apply the laws of armed conflict to that cyber attack because both the perpetrator and impact were unclear. At the time, Georgia blamed its neighbor, Russia, which denied any involvement.

Much also remains unknown about one of the best-known cyber weapons, the Stuxnet computer virus that sabotaged some of Iran's nuclear centrifuges. Some experts suspect, based on coding characteristics, that it was an Israeli attack, possibly with American assistance, but that hasn't been proven. Iran was the location of 60% of the infections, according to a study by the computer security firm Symantec. Other locations included Indonesia, India, Pakistan and the U.S.

Officials from Israel and the U.S. have declined to comment on the allegations.

Defense officials refuse to discuss potential cyber adversaries, although military and intelligence officials say they have identified previous attacks originating in Russia and China. A 2009 government-sponsored report from the U.S.-China Economic and Security Review Commission said that China's People's Liberation Army has its own computer warriors, the equivalent of the American National Security Agency.

That's why military planners believe the best way to deter major attacks is to hold countries that build cyber weapons responsible for their use. A parallel, outside experts say, is the George W. Bush administration's policy of holding foreign governments accountable for harboring terrorist organizations, a policy that led to the U.S. military campaign to oust the Taliban from power in Afghanistan.

Write to Siobhan Gorman at siobhan.gorman@wsj.com

Read more: http://online.wsj.com/article/SB10001424052702304563104576355623135782718.html

Monday, May 30, 2011

China's Blue Army of 30 computer experts could deploy cyber warfare on foreign powers

A report from US anti-virus software maker Symantec last year found that almost 30 percent of so-called malicious emails were sent from China, with 21.3 percent of the attacks originating from the eastern city of Shaoxing.

Read more: http://www.foxnews.com/scitech/2011/05/26/china-confirms-existence-blue-army-elite-cyber-warfare-outfit/


CHINA has admitted for the first time that it had poured massive investment into the formation of a 30-strong commando unit of cyberwarriors - a team supposedly trained to protect the People's Liberation Army from outside assault on its networks.

While the unit, known as the "Blue Army", is nominally defensive, the revelation is likely to confirm the worst fears of governments across the globe who already suspect that their systems and secrets may come under regular and co-ordinated Chinese cyberattack.

In a chilling reminder of China's potential cyberwarfare capabilities, a former PLA general told The Times that the unit had been drawn from an exceptionally deep talent pool.

"It is just like ping-pong. We have more people playing it, so we are very good at it," he said.

The Blue Army, which comprises a few dozen of the best talents China has to offer, is understood to have been drawn from various channels, including existing PLA soldiers, officers, college students and assorted "members of society".

Confirmation of the existence of the Blue Army came during a rare briefing by the Chinese Defence Ministry whose spokesman, Geng Yansheng, said that the unit's purpose was to improve the security of the country's military forces.

Organised under the Guangdong Military Command, the Blue Army is understood to have existed formally for about two years, but had been discussed within the PLA for more than a decade. A report in the official PLA newspaper said that "tens of millions" had been spent on the country's first senior-level military training network.

Xu Guangyu, a senior researcher of the government-backed China Arms Control and Disarmament Association, described the existence of the Blue Army as a great step forward for the PLA and said that China could not afford to allow "blank spaces" to open up in state and military security.

"The internet has no boundaries, so we can't say which country or organisation will be our enemy and who will attack us. The Blue Army's main target is self-defence. We won't initiate an attack on anyone," he said.

In a comment that many foreign governments will argue dramatically understates the true balance of cyberwar capabilities, Mr Xu added: "I don't think our Blue Army's skills are too backward compared to those of other countries."

In a recent test of its powers, reported the PLA Daily, the Blue Army was thrust into a simulated cyberbattle against an attacking force four times its size and left to defend China's military networks against a bombardment of virus attacks, massive barrages of junk mail and stealth missions into the inner sanctums of military planning to steal secret information on troop deployment. The Blue Army, predictably, triumphed.

Asked whether the unit had been set up specifically to mount cyberattacks on foreign countries, Mr Geng said that internet security had become an international issue with an impact on the military field of battle. China, he added, was also a victim and its abilities to protect itself from cyberattack were very weak.

Even without the PLA's acknowledgement of the existence of the Blue Army, sources throughout the internet security industry have long believed that Chinese-based hackers are the single largest source of worldwide cyberattacks.

A report on cyberespionage last year by the US anti-virus software maker Symantec found that more than a quarter of all attempts to steal sensitive corporate data originated in China and that the eastern city of Shaoxing was the single largest generator of attacks. Western intelligence sources believe that many Chinese-originated attacks are carried out by hackers with links to the PLA or the Chinese Government.

Sunday, May 29, 2011

VLT (Very Large Telescope) HD Timelapse Footage



ALL IMAGES: (eso.org) taken on location by Stephane Guisard and Jose Francisco Salgado.

ESO/S. Guisard (http://www.eso.org/~sguisard)

ESO/José Francisco Salgado (http://www.josefrancisco.org)

MUSIC SCORE: "We Happy Few" - The Calm Blue Sea (2008)

EDITION: Nicolas Bustos

Saturday, May 28, 2011



Apostasy (/əˈpɒstəsi/; Greek: ἀποστασία (apostasia), a defection or revolt, from ἀπό, apo, "away, apart", and στάσις, stasis, "stand", "standing") is the formal disaffiliation from or abandonment or renunciation of a religion by a person.

One who commits apostasy apostatizes and is an apostate. These terms have a pejorative implication in everyday use. Sociologists use the term in a technical sense, without pejorative connotation, to mean renunciation and criticism of, or opposition to, a person's former religion.

The term is sometimes also used by extension to refer to renunciation of a non-religious belief or cause, such as a political party, brain trust, or, facetiously, a sports team.

International law

The United Nations Commission on Human Rights considers the recanting of a person's religion a human right legally protected by the International Covenant on Civil and Political Rights:

The Committee observes that the freedom to 'have or to adopt' a religion or belief necessarily entails the freedom to choose a religion or belief, including the right to replace one's current religion or belief with another or to adopt atheistic views [...] Article 18.2[6] bars coercion that would impair the right to have or adopt a religion or belief, including the use of threat of physical force or penal sanctions to compel believers or non-believers to adhere to their religious beliefs and congregations, to recant their religion or belief or to convert.[7]

In many countries apostasy from the religion supported by the state is explicitly forbidden. This is largely the case in some states where Islam is the state religion; conversion to Islam is encouraged, conversion from Islam penalised.

* Iran – illegal (death penalty)[8][9][10]
* Egypt – illegal (death penalty)[10]
* Pakistan – illegal (death penalty[10] since 2007)
* United Arab Emirates – illegal (death penalty)[11]
* Somalia – illegal (death penalty)[12]
* Afghanistan – illegal (death penalty, although the U.S. and other coalition members have put pressure that has prevented recent executions[13][14])
* Saudi Arabia – illegal (death penalty, although there have been no recently reported executions)[15][10]
* Sudan – illegal (death penalty, although there have only been recent reports of torture, and not of execution[16])
* Qatar – illegal (death penalty)[17]
* Yemen – illegal (death penalty) [17]
* Malaysia – illegal in five of 13 states (fine, imprisonment, and flogging)[18][19]
* Mauritania – illegal (death penalty)[citation needed]
* Nigeria – illegal in twelve of 37 states (death penalty)[citation needed]
* Syria – possibly illegal (death penalty), although there is evidence to the contrary[20]

But there are countries where it is different:

* Canada – legal (protected under Section Two of the Canadian Charter of Rights and Freedoms)
* Netherlands – legal (protected under Article Six of the Constitution of the Kingdom of the Netherlands)
* United States – legal (protected under the First Amendment to the United States Constitution; no official state religion recognised)
* India – legal. The Constitution of India allows freedom of religion. In some provinces, including Gujarat and Tamil Nadu, there are laws forbidding conversion in response to the methods used by some sects and religions for mass conversion, and the social tensions caused by conversion under these circumstances.[21]
* Philippines – legal (protected under Article III, Section 5 of the Philippine Constitution)
* Brazil – legal
* Indonesia – legal (protected by the Constitution (UUD 1945), KUHP, Garuda Pancasila)[22]

First quantum computer sold to Lockheed Martin


D-Wave Systems sells its first Quantum Computing System to Lockheed Martin Corporation

May 25th, 2011

VANCOUVER, BC, MAY 25, 2011 - Lockheed Martin Corporation (NYSE: LMT) has entered into an agreement to purchase a quantum computing system from D-Wave Systems Inc.

Lockheed Martin and D-Wave will collaborate to realize the benefits of a computing platform based upon a quantum annealing processor, as applied to some of Lockheed Martin's most challenging computation problems. The multi-year contract includes a system, maintenance and associated professional services.

D-Wave develops computing systems that leverage the physics of quantum mechanics in order to address problems that are hard for traditional methods to solve in a cost-effective amount of time. Examples of such problems include software verification and validation, financial risk analysis, affinity mapping and sentiment analysis, object recognition in images, medical imaging classification, compressed sensing and bioinformatics. D-Wave develops an architecture that is optimized for working with such problems.
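Machines built around a quantum annealing processor are typically programmed by encoding a problem as a QUBO (quadratic unconstrained binary optimization) objective and letting the hardware search for a low-energy bit assignment. As a rough classical analogue, and purely as a sketch (this is not D-Wave's actual toolchain or API), simulated annealing minimizes the same kind of objective:

```python
import math
import random

def anneal_qubo(Q, n, steps=20000, t0=2.0, t1=0.01, seed=0):
    """Classical simulated annealing on a QUBO: minimize x^T Q x, x in {0,1}^n.

    Q is a dict mapping index pairs (i, j) to coefficients.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]

    def energy(state):
        return sum(Q.get((i, j), 0.0) * state[i] * state[j]
                   for i in range(n) for j in range(n))

    e = energy(x)
    best, best_e = x[:], e
    for step in range(steps):
        t = t0 * (t1 / t0) ** (step / steps)   # geometric cooling schedule
        i = rng.randrange(n)
        x[i] ^= 1                              # propose flipping one bit
        e_new = energy(x)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new
            if e < best_e:
                best, best_e = x[:], e
        else:
            x[i] ^= 1                          # reject: flip the bit back
    return best, best_e

# Tiny hypothetical objective: turning on x0 or x1 is rewarded,
# turning on both is penalized, and x2 is always penalized.
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): 0.5, (0, 1): 2.0}
print(anneal_qubo(Q, 3))
```

The optimum here is energy -1.0, reached by switching on exactly one of x0 or x1. Real annealing hardware differs in many ways (it exploits quantum tunneling and a fixed qubit-connectivity graph), but the problem encoding step is similar in spirit.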

"D-Wave is thrilled to establish a strategic relationship with Lockheed Martin Corporation," said Vern Brownell, D-Wave's President and Chief Executive Officer. "Our combined strength will provide capacity for innovation needed to tackle important unresolved computational problems of today and tomorrow. Our relationship will allow us to significantly advance the potential of quantum computing."

D-Wave was featured May 11, 2011 in the prestigious British scientific journal Nature, where its research on quantum annealing was published.

Lockheed Martin is a global security company with headquarters in Bethesda, Md.

D-Wave's mission is to build quantum computing systems that help solve humanity's most challenging problems. It strives to use the deepest insights of physics and computer science to design new types of computers capable of taking on the world's hardest and most important challenges.

Working with Fortune 500 companies, governments and academia, D-Wave helps to craft solutions to problems where data volume and complexity are overwhelming. Applying D-Wave's unique quantum computing technology, the company aims to dramatically improve results through better understanding and insights.

Thursday, May 26, 2011

Google Wallet




Google wants to be your wallet
Cash is dead and credit cards are dying. Your new wallet is your smart phone. Get ready for a world where every transaction will be traceable.

By Dan Tynan
May 26, 2011, 2:26 PM

Attendees watch a demonstration of the Google wallet application screen during a news conference unveiling the mobile payment system.

Source: REUTERS/Shannon Stapleton

Like it or not, your smart phone is turning into your wallet. And if Google has anything to say about it, you’ll be spending gDollars on your gPhones.

Today, to the surprise of practically no one, Google announced Google Wallet, a mobile payments system using the Near Field Communications technology built into certain Android smart phones.

Soon you may be able to wander into your nearby Quickie Mart and wave your Android phone at the register to pay for that six pack of Modelo Especial and bag of Doritos Cool Ranch chips.

For now, though, the “tap-to-pay” system is taking baby steps. Google Wallet will only be available as a trial in New York and San Francisco, and will work only on Samsung Nexus S 4G phones at select retailers using MasterCard PayPass terminals.

But it’s inevitable. Everyone in the financial sector is working on frictionless forms of commerce, and the smart phone is the logical place to put them. You might as well mark today as the day cash died and traditional credit cards went on life support.

Why? Two reasons. One is that bits are much cheaper than molecules and easier to upgrade or replace. No cards to mail out, no postage, etc. So banks will be able to charge the same transaction fees while incurring much lower overhead. The second is the wealth of data that gets generated when you purchase something, much of which goes uncaptured or poorly used. Everybody wants at that, especially Google.

Let’s think about that purchase of Modelo and Doritos. Today, that is probably a straight cash transaction only you and the slacker behind the counter know about, and he’s probably too stoned to remember.

If you bought that with your smart phone, a whole mess of people will be able to remember it.

The store, for example, could aggregate that information to determine that a lot of people are buying Modelo and Doritos at the same time, and may display them closer together inside the store. Or it may determine the demand for Modelo and Doritos spikes after 11 pm and institute variable pricing, charging more for it in the wee hours than it does in the afternoon.

(It may also fire that stoner/slacker employee. In the new frictionless economy, we won't need cash register jockeys.)
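The aggregation the store would run is not exotic. A minimal sketch, using entirely made-up basket data, shows how co-purchase patterns fall out of a transaction log in a few lines:

```python
from collections import Counter
from itertools import combinations

# Hypothetical transaction log: each entry is one customer's basket.
baskets = [
    {"modelo", "doritos"},
    {"modelo", "doritos", "salsa"},
    {"milk", "bread"},
    {"modelo", "doritos"},
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

print(pair_counts.most_common(1))  # → [(('doritos', 'modelo'), 3)]
```

Add a timestamp column and the same counting trick yields the after-11-pm demand spike that makes variable pricing possible.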

Google could take that information and, via its new Google Offers service, send you a coupon for your next purchase. Or Google could use its AdMob subsidiary to send your phone ads for competing products like Corona and Fritos. (And Apple, which will surely offer NFC payments at some point for the iPhone and iPad, could do the same with its Quattro mobile ad service.)

This type of thing already happens in a limited way with supermarket frequent shopper cards; soon it will happen with everything.

Your bank could collect that purchase information and sell it to data brokers, who in turn could sell it to whomever might show an interest. Alcohol and high sodium foods are certainly not good for you -- that’s information your health care provider might be willing to pay for (kiss those “healthy eating” discounts goodbye). Wait, aren’t you in AA? Your estranged spouse’s attorney might be very curious as to what you’re about to do with that six pack. And so on.

I’m not saying all of this will happen. I’m just saying there’s nothing to stop it from happening. When all transactions are traceable – and cash payments become the kind of thing only the very poor or the very criminal rely on -- all kinds of things can happen to that data. Some will be highly convenient; others, not so much.

I won’t even get into the security issues this raises, though it’s pretty clear smart phones just became an even more tempting target for hackers.

But, like I said earlier, frictionless tap-to-pay systems are inevitable. Best to go into them with your eyes wide open.

TY4NS blogger Dan Tynan is now dreaming about chips and beer. Visit his eHumor site eSarcasm or follow him on Twitter: @tynan_on_tech.




Human brain's 'bat sight' found


The part of the brain used by people who can "see like a bat" has been identified by researchers in Canada.

Some blind people have learned to echolocate by making clicking noises and listening to the returning echoes.

A study of two such people, published in PLoS ONE, showed a part of the brain usually associated with sight was activated when listening to echoes.

Action for Blind People said further research could improve the way the technique is taught.

Bats and dolphins bounce sound waves off their surroundings and by listening to the echoes can "see" the world around them.

Some blind humans have also trained themselves to do this, allowing them to explore cities, cycle and play sports.
Brain scan

Researchers looked at two patients who use echolocation every day. EB, aged 43, was blinded at age 13 months. LB, 27, had been blind since age 14.

They were recorded echolocating, while microphones were attached to their ears.

The recordings were then played while their brain activity was being recorded in an fMRI machine.

Increased activity in the calcarine cortex was discovered.

Dr Lore Thaler, from University of Western Ontario, said: "This suggests that visual brain areas play an important role for echolocation in blind people."

The study looked at only two people so cannot say for certain what happens in the brains of all people who learn the technique, but the study concludes: "EB and LB use echolocation in a way that seems uncannily similar to vision."

Susie Roberts, rehabilitation officer at Action for Blind People, said: "This research into brain activity and echolocation is very interesting and improves our understanding of how some visually impaired people may be processing information to help them navigate safely.

"Further investigation may help to improve the way the technique is taught to people in the future, potentially improving their mobility and independence."

Gestural Interfaces: A Step Backwards In Usability


Note: This is to be published as part of my bi-monthly column in the ACM CHI magazine, Interactions. I urge you to read the entire magazine -- subscribe. It's a very important source of design information. See their website at interactions.acm.org. (ACM is the professional society for computer science. CHI = Computer-Human Interaction, but better thought of as the society for Interaction Design.)

Donald A. Norman and Jakob Nielsen

Nielsen Norman Group

One step forward, two steps back.

The usability crisis is upon us, once again. We suspect most of you thought it was over. After all, HCI certainly understands how to make things usable, so the emphasis has shifted to more engaging topics, such as exciting new applications, new technological developments, and the challenges of social networks and ubiquitous connection and communication. Well, you are wrong.

In a recent column for Interactions (reference 2), Norman pointed out that in the rush to develop gestural interfaces - sometimes called "natural" interfaces - well-tested and understood standards of interaction design were being overthrown, ignored, and violated. Yes, new technologies require new methods, but the refusal to follow well-tested, well-established principles leads to usability disaster.

Recently, Raluca Budiu and Hoa Loranger from the Nielsen Norman Group performed usability tests on Apple's iPad (reference 1), reaching much the same conclusion. The new applications for gestural control in smart cellphones (notably the iPhone and the Android) and the coming arrival of larger screen devices built upon gestural operating systems (starting with Apple's iPad) promise even more opportunities for well-intended developers to screw things up. Nielsen put it this way: "The first crop of iPad apps revived memories of Web designs from 1993, when Mosaic first introduced the image map that made it possible for any part of any picture to become a UI element. As a result, graphic designers went wild: anything they could draw could be a UI, whether it made sense or not. It's the same with iPad apps: anything you can show and touch can be a UI on this device. There are no standards and no expectations."

Why are we having trouble? Several reasons:

· The lack of established guidelines for gestural control

· The misguided insistence by companies (e.g., Apple and Google) on ignoring established conventions and establishing ill-conceived new ones.

· The developer community's apparent ignorance of the long history and many findings of HCI research, which leaves developers feeling empowered to unleash untested and unproven creative efforts upon the unwitting public.

In comments to Nielsen's article about our iPad usability studies, some critics claimed that it is reasonable to experiment with radically new interaction techniques when given a new platform. We agree. But the place for such experimentation is in the lab. After all, most new ideas fail, and the more radically they depart from previous best practices, the more likely they are to fail. Sometimes, a radical idea turns out to be a brilliant radical breakthrough. Those designs should indeed ship, but note that radical breakthroughs are extremely rare in any discipline. Most progress is made through sustained, small incremental steps. Bold explorations should remain inside the company and university research laboratories and not be inflicted on any customers until those recruited to participate in user research have validated the approach.

There are several important fundamental principles of interaction design that are completely independent of technology:

· Visibility (also called perceived affordances or signifiers)

· Feedback

· Consistency (also known as standards)

· Non-destructive operations (hence the importance of undo)

· Discoverability: All operations can be discovered by systematic exploration of menus

· Scalability: The operation should work on all screen sizes, small and large

· Reliability: Operations should work. Period. And events should not happen randomly

All these are rapidly disappearing from the toolkit of designers, aided, we must emphasize, by the weird design guidelines issued by Apple, Google, and Microsoft.

What are we talking about? Let us explain.

Non-existing signifiers

In Apple Mail, to delete an unread item, swipe right across the unopened mail and a dialog appears, allowing you to delete the item. Open the email and the same operation has no result. In the Apple calendar, the operation does not work. How is anyone to know, first, that this magical gesture exists, and second, whether it operates in any particular setting?

With the Android, pressing and holding on an unopened email brings up a menu which allows, among other items, deletion. Open the email and the same operation has no result. In the Google calendar, the same operation has no result. How is anyone to know, first, that this magical gesture exists, and second, whether it operates in any particular setting?

Whenever we discuss these examples with others, we invariably get two reactions. One is "gee, I didn't know that." The other is, "did you know that if you do this (followed by some exotic swipe, multi-fingered tap, or prolonged touch), the following happens?" Usually it is then our turn to look surprised and say "no, we didn't know that." This is no way to have people learn how to use a system.
Misleading signifiers

In the Android phone, there are four permanent controls at the bottom of the screen: back, menu, home, and search. They are always visible, suggesting that they are always operative. True for three out of the four, but not for the menu button. This visible menu button implies that there is a menu available, but no, many applications (and places within applications) don't have menus and even those that do don't always have them everywhere. There is no way to tell without pushing the button and discovering that nothing happens. (Actually, it means multiple pushes because the lack of a response the first time may reflect the unreliability of the technology.)

Worse, when on the home screen, pushing the menu will occasionally bring up the on-screen keyboard. Usually a second push of the same key undoes the action done by the first, but in this case, the second push brings up a menu which floats above the keyboard. (The keyboard does not always appear. Despite much experimentation, we are unable to come up with the rules that govern when this will or will not occur.)

Both Apple and Android recommend multiple ways to return to a previous screen. Unfortunately, for any given implementation, the method used seems to depend upon the whim of the designer. Sometimes one can swipe the screen to the right or downwards. Usually, one uses the back button. In the iPhone, if you are lucky, there is a labeled button. (If not, try swiping in all directions and pushing everything visible on the screen.) With the Android, the permanently visible back button provides one method, but sometimes the task is accomplished by sliding the screen to the right. The back button has a major flaw, however. Push the back button to go to the previous page, then again, and then again. And oops, suddenly you are out of the application. There is no feedback that the next push no longer moves back within the application but exits it entirely. (The same flaw exists on the Blackberry.)

In the Android, the back button moves the user through the activities stack, which always includes the originating activity: home. But this programming decision should not be allowed to impact the user experience: falling off the cliff of the application onto the home screen is not good usability practice. (Note too that the stack on the Android does not include all the elements that the user model would include: it explicitly leaves out views, windows, menus, and dialogs.)

Yes, provide a back button - or perhaps call it a dismiss button - but make it follow the user's model of "going back," not the programmer's model that is incorporated into the Activity Stack of the OS. Among other things, it should have a hard stop when at the top level of the application. Allowing it to exit the application is wrong.
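The user-model back stack we are arguing for can be sketched in a few lines. This is a minimal illustration, not any platform's actual API; the class and screen names are hypothetical:

```python
class BackStack:
    """A 'user model' back stack: going back stops at the application's
    top-level screen instead of falling out to the home screen."""

    def __init__(self, root):
        self._stack = [root]          # the root screen is always present

    def push(self, screen):
        self._stack.append(screen)

    def back(self):
        # Hard stop: never pop the root, so 'back' cannot exit the app.
        if len(self._stack) > 1:
            self._stack.pop()
        return self._stack[-1]        # current screen after the press

nav = BackStack("inbox")
nav.push("message")
nav.push("attachment")
assert nav.back() == "message"
assert nav.back() == "inbox"
assert nav.back() == "inbox"   # repeated presses stay at the top level
```

The whole fix is the guard in `back()`: repeated presses converge on the top level rather than dumping the user out of the application.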
Consistency and Standards

Whatever happened to the distinction between radio buttons and checkboxes? Radio buttons meant selection of only one out of all the possibilities: selecting one removed the selection of the others. Checkboxes, however, allow one to select multiple alternatives. Not with these new systems: checkboxes can work any way the whim of the developer decides, often to the distress of the poor person trying to use the system.
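The distinction being lost is easy to state precisely. A minimal sketch of the two selection models, as hypothetical Python classes rather than any platform's actual widget API:

```python
class RadioGroup:
    """Radio-button semantics: selecting one option deselects the others,
    so exactly one (at most) is ever selected."""
    def __init__(self):
        self.selected = None

    def select(self, option):
        self.selected = option        # replaces any previous selection


class CheckboxGroup:
    """Checkbox semantics: selections accumulate independently, so any
    number of options may be selected at once."""
    def __init__(self):
        self.selected = set()

    def toggle(self, option):
        self.selected ^= {option}     # flip this option on or off

radios = RadioGroup()
radios.select("small")
radios.select("large")
assert radios.selected == "large"            # only one at a time

boxes = CheckboxGroup()
boxes.toggle("bold")
boxes.toggle("italic")
assert boxes.selected == {"bold", "italic"}  # several at once
```

When a developer wires a checkbox to radio-style behavior (or vice versa), the visual signifier no longer predicts which of these two models the control will follow.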

Some applications allow pinching to change scale of an image, others use plus and minus boxes. Some allow you to flip screens up, some down, some to the right, some to the left, and some not at all. Touching an image can enlarge it, hyperlink from it, flip it over, unlock it so it can be moved, or whatever the whim of the developer decided.

The different operating system developers have provided detailed Human Interface Guidelines for their products. Unfortunately, the guidelines differ from one another, in part because different companies wish to protect their intellectual property by not allowing other companies to follow their methods. But whatever the reason, proprietary standards make life more difficult for everyone. For sure, they undermine the main way users learn: from each other.

The true advantage of the graphical user interface (GUI) was that commands no longer had to be memorized. Instead, every possible action in the interface could be discovered through systematic exploration of the menus. Discoverability is another important principle that has now disappeared. Apple specifically recommends against the use of menus. Android recommends them, even providing a dedicated menu key, but does not require that it always be active. Moreover, swipes and gestures cannot readily be incorporated in menus: so far, nobody has figured out how to inform the person using the app what the alternatives are.

Home computers, whether laptop or desktop, have always come with a wide variety of screen sizes. Now that computer operating systems are starting to support multi-touch screens, gestures have to work on large screens as well as small. There is a plethora of screen sizes for cellphones, and with the emergence of an in-between breed of pads, we now have mid-sized screens as well. So screens will range from tiny to huge, conceivably wall-sized (or at least whiteboard-sized). Gestures that work well for small screens fail for large ones, and vice versa. Small checkboxes and other targets that work well with mouse and stylus are inappropriate for fingers to hit with precision. Larger screens have their own problems with control sizes. Are the new controls to be used while held in the hand, laid flat upon a surface, or tilted at an angle? All varieties now exist.

Sensitive screens give many opportunities for accidental selection and triggering of actions. This happens on small screens because the target items are small and close together. This happens on large screens because the same hands necessary to hold and stabilize the device can accidentally touch the screen.


Accidental activation is common in gestural interfaces, as users happen to touch something they didn't mean to touch. Conversely, frequently users do intend to touch a control or issue a gestural command but nothing happens because their touch or gesture was a little bit off. Since gestures are invisible, users often don't know that they made these mistakes. Also, a basic foundation of usability is that errors are not the user's fault; they are the system's (or designer's) fault for making it too easy to commit the error.

Traditional GUIs do have similar problems, for example when the mouse is clicked one pixel outside the icon a user intended to activate. But at least the mouse pointer is visible on the screen so that the user can see that it's a bit off.

When users think they did one thing but "really" did something else, they lose their sense of controlling the system because they don't understand the connection between actions and results. The user experience feels random and definitely not empowering.

Some reliability issues can be alleviated by following usability guidelines such as using larger objects and surrounding them with generous click zones. Others are inherent in any new technology, which will have its bugs and not work perfectly. This is all the more reason to enhance user empowerment by designing according to the other interaction principles we have listed in this article.
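The "generous click zone" guideline amounts to accepting touches within a forgiveness margin around each target. A minimal hit-testing sketch, with an arbitrary 8-pixel margin chosen purely for illustration:

```python
def hit(target, x, y, slop=8):
    """Return True when (x, y) falls on the target or within a 'slop'
    margin around it -- a generous click zone that forgives touches
    that are a little bit off. Coordinates are in pixels; the target
    is (left, top, width, height)."""
    left, top, width, height = target
    return (left - slop <= x <= left + width + slop and
            top - slop <= y <= top + height + slop)

checkbox = (100, 100, 16, 16)       # a small 16 x 16 px target
assert hit(checkbox, 108, 108)      # dead centre
assert hit(checkbox, 95, 100)       # 5 px off the edge, still accepted
assert not hit(checkbox, 80, 100)   # clearly outside the zone
```

The margin cannot fix everything (adjacent targets compete for the same slop space), but it converts many near-misses into the action the user intended.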

Lack of undo

Undo! One of the most brilliant inventions of usable computer interfaces seems mostly to have been forgotten. It is very difficult to recover from accidental selections or checking of boxes. First, the result often takes one to a new location. Second, it may not even be obvious what action got you there. For example, if a finger accidentally scrapes an active region, triggering an action, because the trigger was unintentional and subconscious, there is almost no way to know why the resulting action took place.

Novel Interaction Methods

Gestural systems do require novel interaction methods. Indeed, this is one of their virtues: we can use the body. We can tilt and shake, rotate and touch, poke and probe. The results can be extremely effective while also conveying a sense of fun and pleasure. But these interaction styles are still in their infancy, so it is only natural to expect that a great deal of exploration and study still needs to be done.

Shaking has become a standard way of requesting another choice, a gesture that seems to have been discovered accidentally, but that also feels natural and proper. Note, however, that although it is easy and fun to shake a small cellphone, shaking a large pad is neither easy nor much fun. Scrolling through long lists can now be done by rapid swiping of the fingers, providing some visual excitement, but we still need to work out the display dynamics: allowing the items to gather speed and keep going through a form of "momentum," yet making it possible to see where in the list one is even while it whizzes past, and enabling rapid stopping once the desired location seems near.
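The "momentum" dynamics at issue are usually modeled as simple friction: the list keeps moving after the flick while its speed decays each frame. A toy sketch (the friction and cutoff constants are illustrative, not any platform's actual values):

```python
def momentum_positions(velocity, friction=0.95, cutoff=0.5):
    """Positions of a flicked list over successive frames: the list
    keeps going under its own momentum while friction bleeds off
    speed, then stops once the speed drops below a cutoff."""
    positions, pos = [], 0.0
    while abs(velocity) >= cutoff:
        pos += velocity
        velocity *= friction          # exponential decay of speed
        positions.append(pos)
    return positions

frames = momentum_positions(40.0)
# The list decelerates smoothly: each frame moves less than the last.
deltas = [b - a for a, b in zip([0.0] + frames, frames)]
assert all(d2 < d1 for d1, d2 in zip(deltas, deltas[1:]))
```

Tuning `friction` and `cutoff` is exactly the unresolved design problem the text describes: too little friction and the list whizzes past uncontrollably; too much and the satisfying sense of momentum disappears.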

Although pinching and spreading seem natural ways of zooming an object out and in, when the dynamics are badly set, the movements are difficult to control. Different applications today use different rules, which ends up confusing people. Moreover, not all applications allow this even where they could, another source of confusion.

Rotating and tilting the device is also often used to change the display, although for some applications, such as reading, it has been found necessary to provide a lock to prevent the otherwise automatic rotation of the displayed image, which would interfere with easy reading.

The Promise of Gestural Interfaces

The new interfaces can be a pleasure to use, a pleasure to see. They also offer the possibility of scaling back the sometimes heavy-handed visual language of traditional GUIs that were designed back when nobody had seen a scrollbar. In the early 1980s usability demanded GUI elements that fairly screamed "click me." Desktop GUIs are already less neon-colored than Windows 3.0 and we can afford to dial-back the visual prominence a bit more on tablets, which will further enhance their aesthetics. But dialed-back doesn't mean "invisible."

The new displays promise to revolutionize our media: news and opinion pieces can be dynamic, with short videos instead of still photographs and adjustable, manipulable figures instead of static diagrams. Consumer Reports could publish its rating tables with reader-controlled weights, so each viewer would have a tailored set of recommendations based upon standardized test results.

The new devices are also fun to use: gestures add a welcome feeling of activity to the otherwise joyless acts of pointing and clicking.

But the lack of consistency and the inability to discover operations, coupled with the ease of accidentally triggering actions from which there is no recovery, threaten the viability of these systems.

We urgently need to return to our basics, developing usability guidelines for these systems that are based upon solid principles of interaction design, not on the whims of the company human interface guidelines and arbitrary ideas of developers.

Don Norman and Jakob Nielsen are co-founders of the Nielsen Norman Group. Norman is Professor at Northwestern University, Visiting Professor at KAIST (South Korea), and author, his latest book being Living with Complexity. Nielsen founded the "discount usability engineering" movement for interface design and has invented several usability methods, including heuristic evaluation. He holds 79 United States patents, mainly on ways of making the Internet easier to use. Norman can be found at jnd.org. Nielsen is at useit.com.


[1] Nielsen, J. (2010): iPad Usability: First Findings From User Testing. Jakob Nielsen's Alertbox, April 26, 2010. http://www.useit.com/alertbox/ipad.html

[2] Norman, D. A. (2010). Natural User Interfaces Are Not Natural. Interactions, 17, No. 3 (May - June). http://interactions.acm.org/content/?p=1355

[1] Column written for Interactions. © CACM. This is the authors' version of the work. It is posted here by permission of ACM for your personal use. It may be redistributed for non-commercial use only, provided this paragraph is included. The definitive version will be published in Interactions.

Wednesday, May 11, 2011

Talk with a dolphin via underwater translation machine


* 09 May 2011 by MacGregor Campbell

Editorial: "The implications of interspecies communication"

A DIVER carrying a computer that tries to recognise dolphin sounds and generate responses in real time will soon attempt to communicate with wild dolphins off the coast of Florida. If the bid is successful, it will be a big step towards two-way communication between humans and dolphins.

Since the 1960s, captive dolphins have been communicating via pictures and sounds. In the 1990s, Louis Herman of the Kewalo Basin Marine Mammal Laboratory in Honolulu, Hawaii, found that bottlenose dolphins can keep track of over 100 different words. They can also respond appropriately to commands in which the same words appear in a different order, understanding the difference between "bring the surfboard to the man" and "bring the man to the surfboard", for example.

But communication in most of these early experiments was one-way, says Denise Herzing, founder of the Wild Dolphin Project in Jupiter, Florida. "They create a system and expect the dolphins to learn it, and they do, but the dolphins are not empowered to use the system to request things from the humans," she says.

Since 1998, Herzing and colleagues have been attempting two-way communication with dolphins, first using rudimentary artificial sounds, then by getting them to associate the sounds with four large icons on an underwater "keyboard".

By pointing their bodies at the different symbols, the dolphins could make requests - to play with a piece of seaweed or ride the bow wave of the divers' boat, for example. The system managed to get the dolphins' attention, Herzing says, but wasn't "dolphin-friendly" enough to be successful.

Herzing is now collaborating with Thad Starner, an artificial intelligence researcher at the Georgia Institute of Technology in Atlanta, on a project named Cetacean Hearing and Telemetry (CHAT). They want to work with dolphins to "co-create" a language that uses features of sounds that wild dolphins communicate with naturally.

Knowing what to listen for is a huge challenge. Dolphins can produce sound at frequencies up to 200 kilohertz - around 10 times as high as the highest pitch we can hear - and can also shift a signal's pitch or stretch it out over a long period of time.

The animals can also project sound in different directions without turning their heads, making it difficult to use visual cues alone to identify which dolphin in a pod "said" what and to guess what a sound might mean.

To record, interpret and respond to dolphin sounds, Starner and his students are building a prototype device featuring a smartphone-sized computer and two hydrophones capable of detecting the full range of dolphin sounds.

A diver will carry the computer in a waterproof case worn across the chest, and LEDs embedded around the diver's mask will light up to show where a sound picked up by the hydrophones originates from. The diver will also have a Twiddler - a handheld device that acts as a combination of mouse and keyboard - for selecting what kind of sound to make in response.

Herzing and Starner will start testing the system on wild Atlantic spotted dolphins (Stenella frontalis) in the middle of this year. At first, divers will play back one of eight "words" coined by the team to mean "seaweed" or "bow wave ride", for example. The software will listen to see if the dolphins mimic them. Once the system can recognise these mimicked words, the idea is to use it to crack a much harder problem: listening to natural dolphin sounds and pulling out salient features that may be the "fundamental units" of dolphin communication.

The researchers don't know what these units might be. But the algorithms they are using are designed to sift through any unfamiliar data set and pick out interesting features (see "Pattern detector"). The software does this by assuming an average state for the data and labelling features that deviate from it. It then groups similar types of deviations - distinct sets of clicks or whistles, say - and continues to do so until it has extracted all potentially interesting patterns.
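The deviation-labelling step the article describes can be sketched in a few lines. This is a toy illustration over a 1-D stream of numbers, under the assumption of a simple mean/standard-deviation model; real dolphin recordings are spectrograms, and Starner's actual algorithm is far more sophisticated:

```python
def find_deviations(samples, threshold=2.0):
    """Label stretches of a signal that deviate from its average state:
    compute the mean and standard deviation, then report each run of
    samples lying more than `threshold` standard deviations away
    as a (start_index, end_index) event."""
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    std = var ** 0.5 or 1.0           # avoid dividing a flat signal by 0
    events, current = [], []
    for i, s in enumerate(samples):
        if abs(s - mean) > threshold * std:
            current.append(i)         # still inside a deviating stretch
        elif current:
            events.append((current[0], current[-1]))
            current = []
    if current:                       # signal ended mid-event
        events.append((current[0], current[-1]))
    return events

# Quiet background with two bursts of activity:
data = [0.0] * 20 + [9.0, 9.5, 8.8] + [0.0] * 20 + [7.9, 8.2] + [0.0] * 5
assert len(find_deviations(data)) == 2
```

The grouping stage would then cluster these events by similarity - distinct sets of clicks or whistles, say - which is the part that requires the more powerful pattern-discovery machinery described in the sidebar.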

Once these units are identified, Herzing hopes to combine them to make dolphin-like signals that the animals find more interesting than human-coined "words". By associating behaviours and objects with these sounds, she may be the first to decode the rudiments of dolphins' natural language.

Justin Gregg of the Dolphin Communication Project, a non-profit organisation in Old Mystic, Connecticut, thinks that getting wild dolphins to adopt and use artificial "words" could work, but is sceptical that the team will find "fundamental units" of natural dolphin communication.

Even if they do, deciphering their meanings and using them in the correct context poses a daunting challenge. "Imagine if an alien species landed on Earth wearing elaborate spacesuits and walked through Manhattan speaking random lines from The Godfather to passers-by," he says.

"We don't even know if dolphins have words," Herzing admits. But she adds, "We could use their signals, if we knew them. We just don't."
Pattern detector

The software that Thad Starner is using to make sense of dolphin sounds was originally designed by him and a former student, David Minnen, to "discover" interesting features in any data set. After analysing a sign-language video, the software labelled 23 of 40 signs used. It also identified when the person started and stopped signing, or scratched their head.

The software has also identified gym routines - dumb-bell curls, for example - by analysing readings from accelerometers worn by the person exercising, even though the software had not previously encountered such data. However, Starner cautions that if meaning must be ascribed to the patterns picked out by the software, then this will require human input.

U.S. Return to Gold Standard in 5 yrs. - Forbes Predicts


Annoying pop-up on article, copy below:

by Paul Dykewicz

A return to the gold standard by the United States within the next five years now seems likely, because that move would help the nation solve a variety of economic, fiscal, and monetary ills, Steve Forbes predicted during an exclusive interview this week with HUMAN EVENTS.

“What seems astonishing today could become conventional wisdom in a short period of time,” Forbes said.

Such a move would help to stabilize the value of the dollar, restore confidence among foreign investors in U.S. government bonds, and discourage reckless federal spending, the media mogul and former presidential candidate said. The United States used gold as the basis for valuing the U.S. dollar successfully for roughly 180 years before President Richard Nixon ended the practice in the 1970s, an experiment that Forbes said has contributed to a number of the woes the country is suffering from now.

If the gold standard had been in place in recent years, the value of the U.S. dollar would not have weakened as it has and excessive federal spending would have been curbed, Forbes told HUMAN EVENTS. The constantly changing value of the U.S. dollar leads to marketplace uncertainty and consequently spurs speculation in commodity investing as a hedge against inflation.

The only probable 2012 U.S. presidential candidate who has championed a return to the gold standard so far is Rep. Ron Paul (R.-Tex.). But the idea “makes too much sense” not to gain popularity as the U.S. economy struggles to create jobs, recover from a housing bubble induced by the Federal Reserve’s easy-money policies, stop rising gasoline prices, and restore fiscal responsibility to U.S. government’s budget, Forbes insisted.

With a stable currency, it is “much harder” for governments to borrow excessively, Forbes said. Without lax Federal Reserve System monetary policies that led to the printing of too much money, the housing bubble would not have been nearly as severe, he added.

“When it comes to exchange rates and monetary policy, people often don’t grasp” what is at stake for the economy, Forbes said. By restoring the gold standard, the United States would shift away from “less responsible policies” and toward a stronger dollar and a stronger America, he said. “If the dollar was as good as gold, other countries would want to buy it.”

An encouraging sign for Forbes is that key lawmakers besides Rep. Paul are recognizing that the Fed is straying well beyond its intended role of promoting stable prices and full employment with its monetary policies.

Forbes cited Rep. Paul Ryan (R.-Wis.), who, he believes, understands monetary policy better than most lawmakers and has shown a willingness to ask tough but necessary questions. For example, when Federal Reserve Chairman Ben Bernanke appeared before the House Budget Committee in February, Ryan, who chairs the panel, asked Bernanke bluntly how many jobs the Fed’s quantitative-easing program had helped to create.

Politicians need to "get over" the notion that the Fed can guide the economy with monetary policy. The Fed is like a "bull in a china shop," Forbes said. "It can't help but knock things down."

"People know that something is wrong with the dollar," Forbes concluded. "You cannot trash your money without repercussions."

Paul Dykewicz is the editorial director of the Financial Publications Group at Eagle Publishing Inc., www.eaglepub.com, of Washington, D.C. Eagle publishes two free, e-letters, five weekly trading services and four monthly investment newsletters, Forecasts & Strategies, Successful Investing, High Monthly Income and Global Stock Investor.

Friday, May 6, 2011

UK NEWS: FURIOUS BIN LADEN SUPPORTERS VOW TO TAKE REVENGE


By Brendan Abbot


HUNDREDS of Osama bin Laden supporters clashed with English Defence League extremists today as a “funeral service” for the assassinated terror leader sparked fury outside London’s US Embassy.

Police stepped in to separate the chanting groups amid threats of violence from both sides.

US leaders were branded “murderers” by radicals, who warned vengeance attacks were “guaranteed” and shouted: "USA, you will pay."

Protesters carried signs declaring 'Islam will dominate the world' and 'Jihad to defend the Muslims', as well as banners attacking the wars in Iraq and Afghanistan.

The pro-bin Laden 'funeral' took place as relatives of victims of the 7/7 terror attack on London - which claimed 52 lives - wept at the inquest into the atrocity just three miles away.

It was organised by controversial preacher Anjem Choudary, who told reporters after the 'service' that America had created a new generation of Islamic terrorists.

Muslim women demonstrators pray outside of the US embassy in London today

He said: "There will be one million Osamas. Muslims will remember Osama as a great man who stood up against Satan. Many will want to emulate his acts.

"In Britain we have other options - like political action, but in other countries if your land is attacked or your family are put at risk you must defend yourself.

"We believe in the covenant of security, that we must not attack those we live with, but many do not."

The group began their march from the Regents Park Mosque where they tried to recruit some of the thousands who prayed there.

Choudary said: "Who comes is who comes. I am happy if there was three of us. We would still have this demonstration."

The pro-bin Laden event was organised by controversial preacher Anjem Choudary

Abu Muaz, 28, from east London, added: "It is only a matter of time before another atrocity - the West is the enemy.”

However, another man who prayed at the Mosque said the group were a dangerous minority. He said: "They are crazy. They all benefit from UK education and UK benefits.

"They get everything for free and yet they still complain. They are not Islam, they are for Osama.

"You see the people walking past and ignoring them. Most Muslims have better things to do than this."

Meanwhile, EDL members chanted “USA, USA” as Muslims knelt to pray for bin Laden at the opposite end of the highly-secured embassy, in central London.

An ambulance was called to the scene amid reports that one of the extremists had been attacked.

Britain has followed the US in placing its embassies, diplomatic missions and military bases around the world on heightened alert in recent days.

The US said the decision to drop bin Laden’s body into the North Arabian Sea was taken to avoid creating a shrine for the dead al Qaida chief.

An EDL member slipped through police lines to unveil an effigy of bin Laden in the middle of the 300-strong group of extremist Muslims.

It prompted screams of “USA, burn in hell” and “Obama, burn in hell” from angry protesters.

Read more: http://www.express.co.uk/posts/view/245148/Furious-bin-Laden-supporters-vow-to-take-revenge

Waterbear - a visual language for JavaScript



Written by Mike James

Waterbear, a new "Scratch-like" visual programming language, made its debut at a JavaScript conference this week.

Waterbear is the brainchild of Dethe Elza, who presented it at JSConf, held in Portland on 2-3 May 2011. Inspired by Alan Kay's Squeak language and Seymour Papert's Mindstorms, he hopes it will introduce programming concepts to learners, including children. He chose the name Waterbear because he wants it to be an extremely robust language - like the microscopic animals of the same name that are found in extreme environments over the entire world.

This news item is a lot easier to understand if you already know about the Scratch programming language. Scratch is a visual language aimed at beginners, and children in particular. You don't write a program in Scratch; you assemble it by dragging and dropping blocks representing programming constructs onto a design surface. You still have to set values via parameter slots within the blocks, but the difficult task of creating the program's flow of control is reduced to clicking blocks together.


Scratch isn't the only language to take the visual approach. The Android App Inventor uses the same idea to let you create mobile phone apps. One of the first such visual systems was the robot programming language packaged with the Lego Mindstorms kits. Now we have something a little different: Waterbear is a visual programming language that generates JavaScript. Scratch just creates the program and you run it in the Scratch environment, but Waterbear is a compiler, or translator, from the visual language to JavaScript. It doesn't create pure JavaScript, as it uses the jQuery library fairly heavily.
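The compile step such a tool performs - walking a tree of blocks and emitting source text - can be sketched in miniature. The block shapes below are hypothetical illustrations, not Waterbear's actual internal format:

```python
def emit(block):
    """Translate a nested block structure into a JavaScript string,
    the way a Scratch-like editor might compile its drag-and-drop
    program into runnable code. Only two toy block kinds exist here:
    'repeat' (a counted loop) and 'say' (print a message)."""
    kind = block["kind"]
    if kind == "repeat":
        # Recursively emit the blocks snapped inside the loop's body.
        body = " ".join(emit(b) for b in block["body"])
        return ("for (var i = 0; i < %d; i++) { %s }"
                % (block["times"], body))
    if kind == "say":
        return 'console.log("%s");' % block["text"]
    raise ValueError("unknown block: " + kind)

program = {"kind": "repeat", "times": 3,
           "body": [{"kind": "say", "text": "hello"}]}
js = emit(program)
assert js == ('for (var i = 0; i < 3; i++) '
              '{ console.log("hello"); }')
```

Because the blocks already form a well-nested tree, code generation is just a recursive walk - which is why the hard part of such tools is the editing interface, not the compiler.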


As the whole thing runs in a web page, the JavaScript it creates already has an environment to run in, and you can create and test a program without having to install anything additional - this is also one of the first fully web-hosted development systems. Not only is it web hosted, it is, of course, written in HTML5/CSS3 and JavaScript. You can download the code or join in the project at github. You can also try the whole thing out at http://waterbearlang.com/ but be warned: it is a very early alpha.

As well as being a really interesting educational tool, you can't help but speculate on its use for building real HTML5-based apps. Maybe this is the tool we have been looking for!