Monday, March 24, 2014

Honda Batbike - NM4 Vultus





http://www.bbc.com/autos/story/20140322-honda-builds-a-batbike

Honda has revealed a new motorcycle called the NM4 Vultus. While not the most inspiring name, it does at least look really stealth. Cool stealth. Batbike-level stealth.
It's not a Batbike, of course, but is instead inspired by 'Japanimation' - that's anime and manga - both genres long interwoven into the fabric of Japanese life and culture. Just last week we found out how Toyota is mining the same source of inspiration for the new Aygo.

Reference is made to a 'stealth bomber silhouette', and the bike measures 933mm across the mirrors, while the seat sits 650mm high. There are full LED headlights too, while everything else comes in black and stainless steel. Some concession to colour has been made: the digital dash changes colour to suit your mood, ranging from white through blue and pink to red.

Underneath there's a 745cc twin-cylinder engine, canted forward for a low centre of gravity and delivering "strong low and mid-range power and torque". It produces 54bhp and 50lb ft, with twin balance shafts and a single 36mm throttle body. Honda reckons it's efficient, too, offering up 185 miles from the 11.6-litre tank.

The engine is mated to a six-speed dual-clutch gearbox, and it's all mounted on a steel diamond frame; the whole bike weighs in at just 245kg. So then, while no performance figures have been announced, it'll be fast; the power-to-weight ratio works out at around 220bhp per tonne (roughly the same as an old Honda NSX, if we're not mistaken).
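If you want to sanity-check those claims, the sums are easy. Here's a quick, throwaway Python snippet using only the figures quoted above (the only outside number is the standard litres-per-imperial-gallon conversion):

# Back-of-the-envelope check of Honda's quoted figures (numbers from the article above).
power_bhp = 54          # claimed peak power
kerb_weight_kg = 245    # claimed kerb weight
tank_litres = 11.6      # fuel tank capacity
range_miles = 185       # claimed range on a full tank

power_to_weight = power_bhp / (kerb_weight_kg / 1000.0)   # bhp per tonne
mpg_imperial = (range_miles / tank_litres) * 4.546        # 1 imperial gallon = 4.546 litres

print(f"Power-to-weight: {power_to_weight:.0f} bhp/tonne")    # ~220 bhp/tonne
print(f"Implied economy: {mpg_imperial:.0f} mpg (imperial)")  # ~73 mpg

Call it 220bhp per tonne and low-70s mpg, which squares with Honda's efficiency pitch.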

"The NM4 Vultus with its future-shock style presents a look that will not have been seen in any cityscape this side of an anime movie," Honda says. And yes, while we're not TopBike.com, Richard and James never stop talking about them, and it's certainly a cool bike, no? Though maybe not as cool as those Lotus-liveried bikes we saw a while back...

tDCS thinking cap

http://scienceblog.com/71202/electric-thinking-cap-controls-learning-speed/

Electric ‘thinking cap’ controls learning speed


March 23, 2014
Brain & Behavior, Technology
Caffeine-fueled cram sessions are routine occurrences on any college campus. But what if there was a better, safer way to learn new or difficult material more quickly? What if “thinking caps” were real?
In a new study published in the Journal of Neuroscience, Vanderbilt psychologists Robert Reinhart, a Ph.D. candidate, and Geoffrey Woodman, assistant professor of psychology, show that it is possible to selectively manipulate our ability to learn through the application of a mild electrical current to the brain, and that this effect can be enhanced or depressed depending on the direction of the current.
The medial-frontal cortex is believed to be the part of the brain responsible for the instinctive “Oops!” response we have when we make a mistake. Previous studies have shown that a spike of negative voltage originates from this area of the brain milliseconds after a person makes a mistake, but not why. Reinhart and Woodman wanted to test the idea that this activity influences learning because it allows the brain to learn from our mistakes. “And that’s what we set out to test: What is the actual function of these brainwaves?” Reinhart said. “We wanted to reach into your brain and causally control your inner critic.”
Reinhart and Woodman set out to test several hypotheses: One, they wanted to establish that it is possible to control the brain’s electrophysiological response to mistakes, and two, that its effect could be intentionally regulated up or down depending on the direction of an electrical current applied to it. This bi-directionality had been observed before in animal studies, but not in humans. Additionally, the researchers set out to see how long the effect lasted and whether the results could be generalized to other tasks.

Stimulating the brain

Using an elastic headband that secured two electrodes conducted by saline-soaked sponges to the cheek and the crown of the head, the researchers applied 20 minutes of transcranial direct current stimulation (tDCS) to each subject. In tDCS, a very mild direct current travels from the anodal electrode, through the skin, muscle, bones and brain, and out through the corresponding cathodal electrode to complete the circuit. “It’s one of the safest ways to noninvasively stimulate the brain,” Reinhart said. The current is so gentle that subjects reported only a few seconds of tingling or itching at the beginning of each stimulation session.
In each of three sessions, subjects were randomly given either an anodal (current traveling from the electrode on the crown of the head to the one on the cheek), cathodal (current traveling from cheek to crown) or a sham condition that replicated the physical tingling sensation under the electrodes without affecting the brain. The subjects were unable to tell the difference between the three conditions.

The learning task

After 20 minutes of stimulation, subjects were given a learning task that involved figuring out by trial and error which buttons on a game controller corresponded to specific colors displayed on a monitor. The task was made more complicated by occasionally displaying a signal for the subject not to respond—sort of like a reverse “Simon Says.” For even more difficulty, they had less than a second to respond correctly, providing many opportunities to make errors—and, therefore, many opportunities for the medial-frontal cortex to fire.
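To make the paradigm concrete, here's a hypothetical sketch (mine, not the authors') of the kind of trial structure described: a hidden color-to-button mapping that has to be learned by trial and error, occasional no-go trials, and a sub-second response deadline. The colors, button labels and proportions are all assumptions, not details from the paper.

# Hypothetical sketch of the trial structure described above - not code from the study.
import random

COLORS = ["red", "green", "blue", "yellow"]   # colors shown on the monitor (assumed set)
BUTTONS = ["A", "B", "X", "Y"]                # controller buttons (assumed labels)
NO_GO_PROBABILITY = 0.2                       # fraction of "don't respond" trials (assumed)
RESPONSE_DEADLINE_S = 0.8                     # "less than a second to respond"

def make_trials(n_trials, seed=0):
    """Build a randomized trial list. The color-to-button mapping is hidden
    from the subject and has to be discovered by trial and error."""
    rng = random.Random(seed)
    hidden_mapping = dict(zip(COLORS, rng.sample(BUTTONS, len(BUTTONS))))
    trials = []
    for _ in range(n_trials):
        color = rng.choice(COLORS)
        trials.append({
            "color": color,
            "no_go": rng.random() < NO_GO_PROBABILITY,   # the reverse "Simon Says" trials
            "correct_button": hidden_mapping[color],
            "deadline_s": RESPONSE_DEADLINE_S,
        })
    return trials

if __name__ == "__main__":
    for trial in make_trials(5):
        print(trial)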
The researchers measured the electrical brain activity of each participant. This allowed them to watch as the brain changed at the very moment participants were making mistakes, and most importantly, allowed them to determine how these brain activities changed under the influence of electrical stimulation.

Controlling the inner critic

When anodal current was applied, the spike was almost twice as large on average and was significantly higher in a majority of the individuals tested (about 75 percent of all subjects across four experiments). This was reflected in their behavior; they made fewer errors and learned from their mistakes more quickly than they did after the sham stimulus. When cathodal current was applied, the researchers observed the opposite result: The spike was significantly smaller, and the subjects made more errors and took longer to learn the task. “So when we up-regulate that process, we can make you more cautious, less error-prone, more adaptable to new or changing situations—which is pretty extraordinary,” Reinhart said.
The effect was not noticeable to the subjects—their error rates varied by only about 4 percent either way, and their reaction-time adjustments shifted by a matter of only 20 milliseconds—but the changes were plain to see on the EEG. “This success rate is far better than that observed in studies of pharmaceuticals or other types of psychological therapy,” said Woodman.
The researchers found that the effects of a 20-minute stimulation did transfer to other tasks and lasted about five hours.
The implications of the findings extend beyond the potential to improve learning. The technique may also have clinical benefits in the treatment of conditions like schizophrenia and ADHD, which are associated with performance-monitoring deficits.

Read more at http://scienceblog.com/71202/electric-thinking-cap-controls-learning-speed/


Wednesday, March 19, 2014

Sony Virtual Reality Headset For PS4


From slashdot.org

"Sony has announced 'Project Morpheus,' their project to develop a virtual reality headset for use with the PlayStation 4. 'Using a combination of Sony's own hardware, combining personal video viewers with PlayStation Move controllers, PlayStation engineers experimented with multiple prototypes.' They've been working on it for over three years — here's a picture of the current incarnation. The headset will use 3D audio tech that changes as players move their heads. One of their big goals is to make it extremely simple to use. They intend the display to be 1080p with a 90-degree field of view."

Monday, March 17, 2014

Technology for Caricature


I have a friend who is an amazing caricature artist with no budget, but he owns an iPad. His real advantage is that he can draw quickly in real time, with no underdrawing, and entertain an audience while he works.

I did some research, found some great options for the iPad, and thought of some ways to take advantage of them. One advantage of working on a computer of any kind is that you can project onto a big screen in front of a crowd, and you can email the finished art to the client so they can post it on Facebook without scanning. That makes for a great marketing strategy, and it only requires a lightweight travel setup. You can also screen-record yourself drawing and put it on YouTube - imagine giving your client a video of the drawing process along with the finished drawing. Imagine setting up a second camera to record the event and the crowd's reaction. There are screen recording and time-lapse apps for the iPad.



With tools like Skype or Google+, a caricature artist with an iPad can even remote in to perform. 

With the screen projected onto a video display, the crowd can see the artist's desktop and the face of the subject as the caricature comes to life. Whether you're performing at a festival or a corporate event, the projection lets everyone see what's happening and draws a bigger crowd, since people don't have to compete for a spot to watch. 

Again, it's easy to record everything - you can charge clients for the drawing and for the video - and now your client can post the image and/or video to social media. 

And you might even set up a Google Helpouts page where people can pay you to make caricatures:
https://helpouts.google.com/home

Tools List

Procreate is apparently a highly rated, high-resolution drawing app, often paired with a pressure-sensitive stylus - I've not used it, but it looks good. Most other apps won't allow high resolution, which is important for printing.

Examples here:
http://procreate.si

Buy here for $6
https://itunes.apple.com/us/app/procreate/id425073498?mt=8&ign-mpt=uo%3D4

This stylus is said to be the most sensitive - they are not cheap:

https://tenonedesign.com/checkout.php?storeClient=affiliate&product=pogo%20connect&bundleid=au.com.savageinteractive.procreate
http://the-gadgeteer.com/2013/11/10/hex3-jaja-pressure-sensitive-stylus-review/

This artist claims that using his finger is better than a stylus; you just customize your brush in the app:

http://studiojason.wordpress.com/2013/01/31/procreate-for-ipad-an-artists-must-have-app/

Screen recording:

screencast-o-matic - $15/year, records up to 2 hours
http://www.screencast-o-matic.com/

How to record your iPad
http://screencast-o-matic.com/watch/cInob2VIaL

Reference: http://som.screencasthost.com/


 $13 - You may want to use a trial version before purchase and look for other apps if you don't like it.
Time-lapse apps for iPad - use these to speed up the movie of your drawing process
$2
https://itunes.apple.com/us/app/lapse-it-time-lapse-stop-motion/id539108382?mt=8

$3
https://itunes.apple.com/us/app/osnap!-time-lapse-stop-motion/id457402095?mt=8

Sunday, March 16, 2014

Code VS Design

I certainly appreciate the notion of this article, though code knowledge can really enlighten a designer and open up a toolbox for creating more direct, elegant design. Many design elements are code-based, and knowing which is which matters when making edits. There should be a fundamental understanding of what is code and what is graphic, and part of strategy is understanding the entire product as a system not only for delivering an experience, but for building in a flexible way of maintaining and modifying that experience over time.

Dialogue with engineers is needed to understand what is possible, and this is where the Pegasus idea really becomes relevant. I'll admit I've gotten away from front-end coding, and it's a use-it-or-lose-it scenario: keeping up with browsers and so on. Some people have a special knack for coding. I was fine with tables, but CSS added a level of tedium that I don't tolerate well. Along these lines, I'll add that I think it's important to untether from code and rely on a coding resource in order to create a more complex experience, focus on the IA, really think outside the box, and concentrate on results. The key is to at least maintain a close association with adjacent resources, and that is certainly possible in a team environment.

http://uxmag.com/articles/unicorn-shmunicorn-be-a-pegasus

Unicorn, Shmunicorn: Be a Pegasus


If you’re reading this, you’re probably a designer. Maybe you code, maybe you don’t. But it’s likely you’re feeling more and more pressure to hone your programming skills and become that mythical product development creature who can both create compelling designs and write production code.
There are plenty of reasons why being a unicorn isn’t all it’s cracked up to be. But what you might not have considered is that aspiring to be a unicorn could be the biggest mistake of your career.

Conflict of Interest

Having a coder and designer in the same body is tricky. Coders must, above all, serve the machinery, the OS, and the programming language. They have to, or everything goes boom! in a really ugly way.
Meanwhile, as a designer, you focus on human-scale issues, and you’re comfortable grappling with the inconsistencies of human nature. Which is good, because there’s a metric ass-ton of those.
Both roles are essential to the creation of great software, and close collaboration between a stellar coder and a top-shelf designer—along with a solid product manager—is the fast track to a world-beating product.

But when you try to package these skills in a single person, conflicts emerge. What happens when user goals and technical constraints collide as deadlines loom? Do you build the best product for the user, or the product you can implement in the allotted time given your technical abilities?

The hybrid coder/designer is not a new idea. Coders who designed software by default were standard issue in the ‘80s and ‘90s. The result was a flood of badly designed products that made an entire generation of normal people feel exceedingly stupid as they thumbed through the pages of their _____ For Dummies books. In fact, the primary reason software has improved dramatically is due to the establishment of software design/UX/IxD/etc as a separate profession: your profession.
The movement toward unicorns reverses this progress by assimilating designers like yourself into the coding collective, shifting your attention from the user and to the technology—which is what got us into that mess in the first place.

Checking the UX Box

A singular aspect of the Great Unicorn Quest that should give you pause is the implication that user experience as a discipline isn’t significant enough to be a sole focus. As if designing products that people love isn’t sufficient to justify a full-time position.
I mean, sure, you can get to know your users and customers, determine needs, wants, and goals, create personas, invent a concept design, craft the interaction flows, produce detailed wireframes, design pixel-perfect mocks, respond to last-minute feature requests, create production assets, and a million other details I’m glossing over, but when are you going to do some real work? You know, like code something.
Don’t aspire to be a unicorn, digging up nitty-gritty coding grubs with your horn.tweet this
Frankly, if your company doesn’t feel that design is important enough to warrant a full-time position, you should question how committed they are to an awesome user experience—and, for that matter, how you want to spend the next few years of your professional life.

Drowning in Details

User experience requires a lot of detail work; flows, wireframes, edge cases—you know the drill. You may already be so consumed with reactive and detailed design work that you don’t have adequate time to explore big ideas with the potential to dramatically improve the user experience.
Well, there’s one sure-fire way to make this worse: spend lots of time worrying about the details of a neighboring discipline: programming. Why isn’t this build working? What library can I use for this? What’s with this jacked-up PHP code?

Your time is the ultimate zero-sum game. The more you spend on the complexity and details of coding, the less you have to make the product experience better for your users or to influence product strategy.

A Better Idea: Be a Pegasus

It’s time to think bigger and more strategically about your career. The software industry needs high-powered product people in VP Product and Chief Product Officer roles. Today, these positions tend to be filled by people who came up from marketing, product management, engineering, or general business backgrounds. And some of them are very good in these roles.

But who better to take on the product challenges of the future than cream-of-the-crop UX professionals? No one is closer to the intersection of people’s goals and a company’s products than the designers sweating over every detail of the user experience, day in and day out. Rather than re-inventing yourself as a part-time, mediocre coder, consider aiming your trajectory squarely at these product leadership positions.

Instead of diving into the tactical details of programming, level up: Shadow your product managers and learn how they operate. Take a deep dive into your company’s product roadmap. Explore your company’s market strategy. Discover the top three things that the CEO is concerned about. Understand the high-level strategies in play in all areas of the business.

Don’t aspire to be a unicorn, digging up nitty-gritty coding grubs with your horn. Unfurl your wings and see the 10,000-foot view of where your business is headed, then use your design perspective to help your company and industry soar to new heights.
Be a Pegasus.

"Pegasus" image by Hannah Photography.

Solar powered toilet


From Slashdot:

"With funding from the Bill and Melinda Gates Foundation's Reinvent the Toilet challenge, [a] team has developed a toilet that uses concentrated solar power to scorch and disinfect human waste, turning feces into a useful byproduct called biochar ... a sanitary charcoal material that is good for soils and agriculture. By converting solid waste to biochar (liquid waste is diverted elsewhere, as it's easier to deal with), the toilet thus allows for sanitary waste disposal without huge infrastructure investments. The toilet itself, called the Sol-Char, is a fascinating bit of engineering. In order to sanitize waste without the help of massive treatment facilities, Linden's team instead designed the toilet to scorch waste in a chamber heated by fiber optic cables that pipe in heat from solar collectors on the toilet's roof. 'A solar concentrator has all this light focused in on one centimeter. It'd be fine if we could bring everyone's fecal waste up to that one point, like burning it with a magnifying glass,' Linden said. 'But that's not practical, so we were thinking of other ways to concentrate that light.'"

Saturday, March 15, 2014

The Irony


HTML5 is loud and proud

Take a bite if it's ripe for the picking

Now Adobe and Steve are both in the cloud

but Flash is the one still kicking

Overcoming creative blocks

http://www.fastcompany.com/3026913/dialed/5-successful-authors-on-how-they-overcame-creative-blocks-to-write-their-first-book?partner=newsletter

5 Successful Authors On How They Overcame Creative Blocks To Write Their First Book

Agonizing over every sentence, losing years of research, receiving rejection after rejection--writing a book isn't always the divinely creative process it seems to be. Five authors share their struggle, and ultimate success, in completing their first books.
The best books seem to have an effortlessness to their writing, as though each word has been set down just where it needs to be. Nowhere on the page is the agonizing, writing, rewriting, not writing of getting it done visible. "Writing a long novel is like survival training," Haruki Murakami has said. "Physical strength is as necessary as artistic sensitivity."

And rarely does it come out right the first time around. Ernest Hemingway rewrote the last page of A Farewell to Arms thirty-nine times before he was satisfied with it. John Cheever describes the act of finishing a book as "invariably something of a psychological shock."

All too often, there comes a point in a creative project when your progress seems to hit a wall--when the idea of ever finishing feels nearly impossible.

I spoke with five writers who recently finished their first book about how they dealt with such moments in their own work and what they did to overcome these creative blocks.

"Creating a community reminded me why we write."
-Julia Fierro, author of Cutting Teeth (St. Martin's Press)


Julia Fierro graduated from the Iowa Writers Workshop and wrote her first novel in seven months. After a year of rejections from one editor after the next, her confidence had been shattered and for the next seven years, she barely wrote. "I felt like I was leading a fraudulent life," says Fierro.
Feeling like she didn't fit into the New York literary scene, in 2003, Fierro posted an ad on Craigslist for a writing workshop. Eleven years later, what started as eight writers meeting in her Brooklyn kitchen has grown into The Sackett Street Writers' Workshop, which has had more than 2,000 writers in its classes and workshops over the years. Creating that community of writers is what got Fierro over her block. Toward the end of 2011, she wrote her book Cutting Teeth in less than a year. "I had lost my confidence for so long," she says. "I started Sackett Street as a place where I could hide. …It was really creating a community that reminded me why we write."
"You do the work when you're not in front of it."
-Ted Thompson, author of The Land of Steady Habits (Little Brown and Company)

When Ted Thompson sold his novel, The Land of Steady Habits to Little Brown and Company in 2010, he knew it wasn't quite ready. He wrote and rewrote the same scene countless times, took his dog on long walks, then sat in front of his computer screen staring. This went on for a year.
Then one afternoon, he deleted everything but the first 60 pages of the book, named this new document "Crazy Fucking Experiment" and let himself write. Before he knew it, he had 10 new pages, then 20, then 30. All of a sudden, the writing flowed. "You do the work when you're not in front of it," he says. "With a long project like that, it can start to feel like a reflection of your own identity. I think it's important to let it be a book."

"Enter your story in a different way."
-Mira Jacob, author of The Sleepwalker's Guide to Dancing (Random House)

With a 12-hour workday at an online media company and a young son at home, Mira Jacob found time to work on her novel from 11 p.m. to 1 a.m. Sometimes she'd wake up with her face on the keyboard. When Jacob got laid off, she decided to take three months to complete her book, which she'd been working on for nearly a decade. But when she finished writing, she felt stuck. "I'd gotten to the end and the magical thing that was supposed to, didn't happen," she says. "It was like looking at a Rubik's cube, thinking, 'I have to get it all to align. It's all there.'"

Jacob was stumped. Her husband, documentary filmmaker Jed Rothstein, told her to storyboard the plot, writing it out on index cards. As soon as she did that, she saw where the problems were. "If you enter the process differently, you can decode it and see it in a different way," says Jacob.
"Don't go down the rabbit hole."
-Amy Brill, author of The Movement of Stars (Riverhead)

It took Amy Brill 15 years to write her first novel. When she came upon her inspiration for the book--the Nantucket home of a girl astronomer from the 1800s--she decided to study everything she could about the topic. "I felt I needed to know everything about everything before I wrote even a page," she says. She gathered documents and transcribed journals and letters. After nine years of this, Brill had amassed a backpack of research and had only written 100 pages.

In 2006, Brill lugged her backpack of research to a writing residency in Spain. On her flight home, she checked the bag at the airport and never saw it again. The loss was devastating and for nearly two years, she couldn't work on the book. Then, pregnant with her first child, Brill told herself she would finish the book before her baby was born. "Even if it was drivel, I just had to push forward," she says. Brill wrote four pages a day every day, resisting the urge to get lost in research as she had before. Ten days before her daughter was born, she finished her first draft. "I learned to just go forward," she says. "Don't go down the rabbit hole."

"Sometimes you don’t know what you're writing until you've finished it."
-Vu Tran, author of This Or Any Desert (WW Norton & Company)

In 2009, Vu Tran won the Whiting Writers Award, an honor given to emerging writers that comes with a $50,000 prize. That same year, he signed a book deal with WW Norton for his first novel. By that point, Tran had only written the first 60 pages and planned to take 18 months to complete the book. But things did not go as planned.

Tran wanted every sentence to be just right before he went on to the next one. He spent a month and a half just working on the first three sentences of the book. Every time he tried to step back and think about the larger story, he became paralyzed with uncertainty. "I was trying too hard to figure out the rest of the novel so I could write towards it," he says. "I couldn't do that. The only way I could do it was to go sentence by sentence and just trust in the process." Tran completed the book in January of 2014. "That was the only way I was able to do it--inch by inch, word by word, sentence by sentence. … Sometimes you don’t know what you're writing until you've finished it."

Brain Implants

The Future of Brain Implants

How soon can we expect to see brain implants for perfect memory, enhanced vision, hypernormal focus or an expert golf swing?

March 14, 2014 7:30 p.m. ET
 

What would you give for a retinal chip that let you see in the dark or for a next-generation cochlear implant that let you hear any conversation in a noisy restaurant, no matter how loud? Or for a memory chip, wired directly into your brain's hippocampus, that gave you perfect recall of everything you read? Or for an implanted interface with the Internet that automatically translated a clearly articulated silent thought ("the French sun king") into an online search that digested the relevant Wikipedia page and projected a summary directly into your brain?

Science fiction? Perhaps not for very much longer. Brain implants today are where laser eye surgery was several decades ago. They are not risk-free and make sense only for a narrowly defined set of patients—but they are a sign of things to come. 

Unlike pacemakers, dental crowns or implantable insulin pumps, neuroprosthetics—devices that restore or supplement the mind's capacities with electronics inserted directly into the nervous system—change how we perceive the world and move through it. For better or worse, these devices become part of who we are.
Neuroprosthetics aren't new. They have been around commercially for three decades, in the form of the cochlear implants used in the ears (the outer reaches of the nervous system) of more than 300,000 hearing-impaired people around the world. Last year, the Food and Drug Administration approved the first retinal implant, made by the company Second Sight. 

Both technologies exploit the same principle: An external device, either a microphone or a video camera, captures sounds or images and processes them, using the results to drive a set of electrodes that stimulate either the auditory or the optic nerve, approximating the naturally occurring output from the ear or the eye.
Another type of now-common implant, used by thousands of Parkinson's patients around the world, sends electrical pulses deep into the brain proper, activating some of the pathways involved in motor control. A thin electrode is inserted into the brain through a small opening in the skull; it is connected by a wire that runs to a battery pack underneath the skin. The effect is to reduce or even eliminate the tremors and rigid movement that are such prominent symptoms of Parkinson's (though, unfortunately, the device doesn't halt the progression of the disease itself). Experimental trials are now under way to test the efficacy of such "deep brain stimulation" for treating other disorders as well.

Electrical stimulation can also improve some forms of memory, as the neurosurgeon Itzhak Fried and his colleagues at the University of California, Los Angeles, showed in a 2012 article in the New England Journal of Medicine. Using a setup akin to a videogame, seven patients were taught to navigate a virtual city environment with a joystick, picking up passengers and delivering them to specific stores. Appropriate electrical stimulation to the brain during the game increased their speed and accuracy in accomplishing the task. 

But not all brain implants work by directly stimulating the brain. Some work instead by reading the brain's signals—to interpret, for example, the intentions of a paralyzed user. Eventually, neuroprosthetic systems might try to do both, reading a user's desires, performing an action like a Web search and then sending the results directly back to the brain.

How close are we to having such wondrous devices? To begin with, scientists, doctors and engineers need to figure out safer and more reliable ways of inserting probes into people's brains. For now, the only option is to drill small burr-holes through the skull and to insert long, thin electrodes—like pencil leads—until they reach their destinations deep inside the brain. This risks infection, since the wires extend through the skin, and bleeding inside the brain, which could be devastating or even fatal. 

External devices, like the brainwave-reading skull cap made by the company NeuroSky (marketed to the public as "having applications for wellness, education and entertainment"), have none of these risks. But because their sensors are so far removed from individual neurons, they are also far less effective. They are like Keystone Kops trying to eavesdrop on a single conversation from outside a giant football stadium.

Photo: A boy wearing a cochlear implant for the hearing-impaired; a second portion is surgically implanted under the skin.
 
Today, effective brain-machine interfaces have to be wired directly into the brain to pick up the signals emanating from small groups of nerve cells. But nobody yet knows how to make devices that listen to the same nerve cells that long. Part of the problem is mechanical: The brain sloshes around inside the skull every time you move, and an implant that slips by a millimeter may become ineffective. 

Another part of the problem is biological: The implant must be nontoxic and biocompatible so as not to provoke an immune reaction. It also must be small enough to be totally enclosed within the skull and energy-efficient enough that it can be recharged through induction coils placed on the scalp at night (as with the recharging stands now used for some electric toothbrushes).

These obstacles may seem daunting, but many of them look suspiciously like the ones that cellphone manufacturers faced two decades ago, when cellphones were still the size of shoeboxes. Neural implants will require even greater advances since there is no easy way to upgrade them once they are implanted and the skull is sealed back up. 

But plenty of clever young neuro-engineers are trying to surmount these problems, like Michel Maharbiz and Jose Carmena and their colleagues at the University of California, Berkeley. They are developing a wireless brain interface that they call "neural dust." Thousands of biologically neutral microsensors, on the order of one-tenth of a millimeter (approximately the thickness of a human hair), would convert electrical signals into ultrasound that could be read outside the brain. 

The real question isn't so much whether something like this can be done but how and when. How many advances in material science, battery chemistry, molecular biology, tissue engineering and neuroscience will we need? Will those advances take one decade, two decades, three or more? As Dr. Maharbiz said in an email, once implants "can be made 'lifetime stable' for healthy adults, many severe disabilities…will likely be chronically treatable." For millions of patients, neural implants could be absolutely transformative.
Assuming that we're able to clear these bioengineering barriers, the next challenge will be to interpret the complex information from the 100 billion tiny nerve cells that make up the brain. We are already able to do this in limited ways.

Based on decades of prior research in nonhuman primates, John Donoghue of Brown University and his colleagues created a system called BrainGate that allows fully paralyzed patients to control devices with their thoughts. BrainGate works by inserting a small chip, studded with about 100 needlelike wires—a high-tech brush—into the part of the neocortex controlling movement. These motor signals are fed to an external computer that decodes them and passes them along to external robotic devices.
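To give a rough feel for what "decoding" means here, below is a hypothetical toy example - emphatically not the actual BrainGate algorithm - that fits a simple linear map from the firing rates of about 100 recorded channels to a 2-D hand velocity, a textbook starting point for this kind of interface. The data are synthetic and every number is an assumption.

# Hypothetical toy illustration of neural decoding (synthetic data; NOT BrainGate's method).
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 2000, 100            # ~100 channels, as with the chip described above

# Synthetic "calibration" data: firing rates plus the hand velocities they accompany.
true_map = rng.normal(size=(n_channels, 2))  # pretend channel-to-velocity relationship
rates = rng.poisson(lam=5.0, size=(n_samples, n_channels)).astype(float)
velocity = rates @ true_map + rng.normal(scale=5.0, size=(n_samples, 2))

# Fit a least-squares decoder on the calibration block.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a new burst of activity into an intended 2-D velocity (what would drive a robot arm).
new_rates = rng.poisson(lam=5.0, size=(1, n_channels)).astype(float)
print(new_rates @ decoder)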

Almost a decade ago, this system was used by a tetraplegic to control an artificial hand. More recently, in a demonstration of the technology's possibilities that is posted on YouTube, Cathy Hutchinson, paralyzed years earlier by a brainstem stroke, managed to take a drink from a bottle of coffee by manipulating a robot arm with only her brain and a neural implant that literally read (part of) her mind.

For now, guiding a robot arm this way is cumbersome and laborious, like steering a massive barge or an out-of-alignment car. Given the current state of neuroscience, even our best neuroscientists can read the activity of a brain only as if through a glass darkly; we get the gist of what is going on, but we are still far from understanding the details.

In truth, we have no idea at present how the human brain does some of its most basic feats, like translating a vague desire to return that tennis ball into the torrent of tightly choreographed commands that smoothly execute the action. No serious neuroscientist could claim to have a commercially ready brain-reading device with a fraction of the precision or responsiveness of a computer keyboard.

In understanding the neural code, we have a long way to go. That's why the federally funded BRAIN Initiative, announced last year by President Barack Obama, is so important. We need better tools to listen to the brain and more precise tools for sending information back to the brain, along with a far more detailed understanding of different kinds of nerve cells and how they fit together in complex circuits.

The coarse-grained functional MRI brain images that have become so popular in recent years won't be enough. For one thing, they are indirect; they measure changes not in electrical activity but in local blood flow, which is at best an imperfect stand-in. Images from fMRIs also lack sufficient resolution to give us true mastery of the neural code. Each three-dimensional pixel (or "voxel") in a brain scan contains a half-million to one million neurons. What we really need is to be able to zero in on individual neurons.

Zooming in further is crucial because the atoms of perception, memory and consciousness aren't brain regions but neurons and even finer-grained elements. Chemists turned chemistry into a quantitative science once they realized that chemical reactions are (almost) all about electrons making and breaking bonds among atoms. Neuroscientists are trying to do the same thing for the brain. Until we do, brain implants will be working only on the logic of forests, without sufficient understanding of the individual trees.

One of the most promising tools in this regard is a recently developed technique called optogenetics, which hijacks the molecular machinery of the genes found inside every neuron to directly manipulate the brain's circuitry. In this way, any group of neurons with a unique genetic ZIP Code can be switched on or off, with unparalleled precision, by brief pulses of different colored light—effectively turning the brain into a piano that can be played. This fantastic marriage of molecular biology with optics and electronics is already being deployed to build advanced retinal prosthetics for adult-onset blindness. It is revolutionizing the whole field of neuroscience.

Advances in molecular biology, neuroscience and material science are almost certainly going to lead, in time, to implants that are smaller, smarter, more stable and more energy-efficient. These devices will be able to interpret directly the blizzard of electrical activity inside the brain. For now, they are an abstraction, something that people read about but are unlikely to experience for themselves. But someday that will change. 

Consider the developmental arc of medical technologies such as breast surgery. Though they were pioneered for post-mastectomy reconstruction and for correcting congenital defects, breast augmentation and other cosmetic procedures such as face-lifts and tummy tucks have become routine. The procedures are reliable, effective and inexpensive enough to be attractive to broad segments of society, not just to the rich and famous.

Eventually neural implants will make the transition from being used exclusively for severe problems such as paralysis, blindness or amnesia. They will be adopted by people with less traumatic disabilities. When the technology has advanced enough, implants will graduate from being strictly repair-oriented to enhancing the performance of healthy or "normal" people. They will be used to improve memory, mental focus (Ritalin without the side effects), perception and mood (bye, bye Prozac).

Many people will resist the first generation of elective implants. There will be failures and, as with many advances in medicine, there will be deaths. But anybody who thinks that the products won't sell is naive. Even now, some parents are willing to let their children take Adderall before a big exam. The chance to make a "superchild" (or at least one guaranteed to stay calm and attentive for hours on end during a big exam) will be too tempting for many.

Even if parents don't invest in brain implants, the military will. A continuing program at Darpa, a Pentagon agency that invests in cutting-edge technology, is already supporting work on brain implants that improve memory to help soldiers injured in war. Who could blame a general for wanting a soldier with hypernormal focus, a perfect memory for maps and no need to sleep for days on end? (Of course, spies might well also try to eavesdrop on such a soldier's brain, and hackers might want to hijack it. Security will be paramount, encryption de rigueur.)

An early generation of enhancement implants might help elite golfers improve their swing by automating their mental practice. A later generation might allow weekend golfers to skip practice altogether. Once neuroscientists figure out how to reverse-engineer the end results of practice, "neurocompilers" might be able to install the results of a year's worth of training directly into the brain, all in one go.

That won't happen in the next decade or maybe even in the one after that. But before the end of the century, our computer keyboards and trackpads will seem like a joke; even Google Glass 3.0 will seem primitive. Why would you project information onto your eyes (partly occluding your view) when you could write information into your brain so your mind can directly interpret it? Why should a computer wait for you to say or type what you mean rather than anticipating your needs before you can even articulate them?
By the end of this century, and quite possibly much sooner, every input device that has ever been sold will be obsolete. Forget the "heads-up" displays that the high-end car manufacturers are about to roll out, allowing drivers to see data without looking away from the road. By the end of the century, many of us will be wired directly into the cloud, from brain to toe.

Will these devices make our society as a whole happier, more peaceful and more productive? What kind of world might they create?

It's impossible to predict. But, then again, it is not the business of the future to be predictable or sugarcoated. As President Ronald Reagan once put it, "The future doesn't belong to the fainthearted; it belongs to the brave." 

The augmented among us—those who are willing to avail themselves of the benefits of brain prosthetics and to live with the attendant risks—will outperform others in the everyday contest for jobs and mates, in science, on the athletic field and in armed conflict. These differences will challenge society in new ways—and open up possibilities that we can scarcely imagine.

Dr. Marcus is professor of psychology at New York University and often blogs about science and technology for the New Yorker. Dr. Koch is the chief scientific officer of the Allen Institute for Brain Science in Seattle.

Wednesday, March 12, 2014

FDA Approves Electric Headband to Prevent Migraine

http://abcnews.go.com/Health/wireStory/fda-approves-electric-headband-prevent-migraine-22866986



The Food and Drug Administration said Tuesday it approved a nerve-stimulating headband as the first medical device to prevent migraine headaches.

Agency officials said the device provides a new option for patients who cannot tolerate migraine medications.

The Cefaly device is a battery-powered plastic band worn across the forehead. Using an adhesive electrode, the band emits a low electrical current to stimulate nerves associated with migraine pain. Users may feel a tingling sensation on the skin where the electrode is applied. The device is designed to be used no more than 20 minutes a day by patients 18 years and older.

A 67-person study reviewed by the FDA showed patients using the device experienced fewer migraines per month than patients using a placebo device. The Cefaly headband did not completely eliminate migraine headaches or reduce the intensity of migraines that occurred.

About 53 percent of 2,313 patients in a separate study said they were satisfied with the device and were willing to purchase it for future use.

No serious adverse events were connected with the device.

Cefaly is manufactured by Cefaly Technology of Belgium.

Friday, March 7, 2014

Lego Movie - Duplo aliens


"We are from the planet DUPLO, and have come to destroy you!"
―DUPLO Aliens



http://lego.wikia.com/wiki/DUPLO_Aliens

DUPLO Aliens are creatures that appear in The LEGO Movie. They are mainly built out of DUPLO pieces.

They appear at the end of the movie, after Finn's father, "The Man Upstairs", states that Finn's little sister will be allowed to play with the LEGO as well. They are beamed down from a ship into Bricksburg, and announce their intentions to invade. It is possible that they were defeated after the movie. 

Interview with Writer/Director team, Phil Lord & Chris Miller
http://www.imdb.com/video/hulu/vi4176259865?ref_=nm_rvd_vi_2 



http://en.wikipedia.org/wiki/The_Lego_Movie

UniKitty clip

http://lego.wikia.com/wiki/File:The_LEGO_Movie_-_%22Cloud_Cuckoo_Land%22_Clip

Good Morning clip


http://lego.wikia.com/wiki/File:The_LEGO_Movie_-_%22Good_Morning%22_Clip

Everything is Awesome clip

http://lego.wikia.com/wiki/File:The_LEGO_Movie_-_%22Everything_is_Awesome%22_Clip

http://en.wikipedia.org/wiki/Everything_Is_Awesome!!!


International Business Times described the song as a parody of creeping fascism, saying that the song is "little more than an infectiously catchy parody of watered-down radio pop, right down to the faux-dubstep breakdown. There’s a lot more happening under the surface, however."[6]

http://www.metrolyrics.com/everything-is-awesome-lyrics-tegan-and-sara.html

Lyrics


Everything is awesome
Everything is cool when you're part of a team
Everything is awesome, when we're living our dream
Everything is better when we stick together
Side by side, you and I gonna win forever, let's party forever
We're the same, I'm like you, you're like me, we're all working in harmony
Everything is awesome
Everything is cool when you're part of a team
Everything is awesome, when we're living our dream
Wooo
3, 2, 1. go
Have you heard the news, everyone's talking
Life is good 'cause everything's awesome
Lost my job, it's a new opportunity
More free time for my awesome community
I feel more awesome than an awesome opossum
Dip my body in chocolate frostin'
Three years later, washed out the frostin'
Smellin' like a blossom, everything is awesome
Stepped in mud, got new brown shoes
It's awesome to win, and it's awesome to lose (it's awesome to lose)
Everything is better when we stick together
Side by side, you and I, gonna win forever, let's party forever
We're the same, I'm like you, you're like me, we're all working in harmony
Everything is awesome
Everything is cool when you're part of a team
Everything is awesome, when we're living our dream
Blue skies, bouncy springs
We just named two awesome things
A Nobel prize, a piece of string
You know what's awesome, everything
Dogs and fleas, allergies, a book of Greek antiquities
Brand new pants, a very old vest
Awesome items are the best
Trees, frogs, clogs
They're awesome
Rocks, clocks, and socks
They're awesome
Figs, and jigs, and twigs
That's awesome
Everything you see, or think, or say
...Is awesome
Everything is awesome
Everything is cool when you're part of a team
Everything is awesome, when we're living our dream

Genomic Medicine - DNA Sequencing

http://news.yahoo.com/dawning-age-genomic-medicine-finally-213006405--finance.html;_ylt=AwrBEiJE6hhTvjAAKkDQtDMD

The dawning of the age of genomic medicine, finally

Reuters
Photo: Craig Venter speaks with Eric Topol, Scripps Health chief academic officer, at the "Future of Genomic Medicine" symposium at Scripps Seaside Forum in La Jolla.
By Julie Steenhuysen

LA JOLLA, California (Reuters) - When President Bill Clinton announced in 2000 that Craig Venter and Dr. Francis Collins of the National Human Genome Research Institute had succeeded in mapping the human genome, he solemnly declared that the discovery would "revolutionize" the treatment of virtually all human disease.

The expectation was that this single reference map of the 3 billion base pairs of DNA -- the human genetic code -- would quickly unlock the secrets of Alzheimer's, diabetes, cancer and other scourges of human health.

As it turns out, Clinton's forecast was not unlike President George Bush's "mission accomplished" speech in the early days of the Iraq war, said Dr. Eric Topol of Scripps Translational Science Institute, which is running a meeting on the Future of Genomic Medicine here March 6-7.

Thirteen years after Clinton's forecast, even Venter acknowledges that mapping the human genome has had little clinical impact. "Yes, there's been progress, but we all would have hoped it would have been more rapid," he said in an interview in his offices this week.

But that is finally changing.

"We are at an inflection point," said Collins, who now directs the National Institutes of Health. In a telephone interview, he said he never expected an "overnight, dramatic impact" from sequencing the human genome, in part because of cost.

Recently, a combination of lower-cost sequencing technology and a growing list of wins in narrow corners of medicine are starting to show that genomic medicine is on the verge of delivering on at least some of those early claims.

Recent advances in sequencing have been "pretty stunning" and genomics is "just on the threshold" of delivering results, Venter told Reuters.

Although much is left to be learned about the genome, scientists believe knowing a person's genetic code will lead to highly personalized treatments for cancer, better predictions for diseases in babies and help unlock the puzzle of mysterious genetic diseases that currently go undiagnosed and untreated.

Venter is staking his latest entrepreneurial venture on that expectation. Earlier this week, he announced formation of a new company, Human Longevity Inc., to undertake a massive project: sequencing 40,000 human genomes a year in a search for new therapies to preserve health and fight off diseases, including cancer, heart disease and Alzheimer's.

To do that, Human Longevity will use two HiSeq X Ten machines and has an option to buy three more. The sequencers, made by Illumina Inc., can map a single genome for as little as $1,000.
Collins' government-funded Human Genome Project spent $3 billion and took 13 years to sequence the human genome.

Breaching the $1,000 genome could prove to be a watershed. At that cost, said Illumina Chief Executive Jay Flatley, ambitious projects like Venter's are economically feasible and clinical results more achievable.
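The arithmetic behind that "watershed" is stark. Using only the figures quoted in this article (a rough illustration, not a cost model):

# Rough arithmetic from the figures quoted in the article - illustrative only.
hgp_cost_usd = 3_000_000_000   # Human Genome Project: ~$3 billion for the first genome
per_genome_usd = 1_000         # quoted HiSeq X Ten cost per genome
genomes_per_year = 40_000      # Human Longevity Inc.'s stated sequencing target

cost_drop = hgp_cost_usd / per_genome_usd           # roughly a 3,000,000-fold reduction
annual_cost = genomes_per_year * per_genome_usd     # about $40 million per year

print(f"Per-genome cost has fallen roughly {cost_drop:,.0f}-fold")
print(f"Sequencing 40,000 genomes a year at $1,000 each costs about ${annual_cost:,}")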

"We've still only scratched the surface of what the genome holds," he said. "What we need to do now is get hundreds of thousands to millions of genomes in databases with clinical information," he added.

MAKING A DIFFERENCE

Advances in sequencing equipment and the advent of next-generation sequencing have transformed the work Dr. Elizabeth McNally does as director of the Cardiovascular Genetics Clinic at the University of Chicago.

In seven short years, she said, her group has gone from testing just one gene at a time to testing 60 to 70 genes and she is moving quickly into whole genome sequencing.
McNally points to the case of Jeanne Sambrookes - a patient who is alive today because of these advances.

As a child, Sambrookes often noticed the distinct, hunched posture of her mother, her aunt and her grandmother as they struggled to climb a flight of stairs.

Sambrookes had been very athletic as a young teen, but as she matured, she noticed a heaviness in her legs. By age 20, running left her tired. At 40, she needed a pacemaker, just like her mother did at that age.

"I started thinking there is something to this," said Sambrookes, now 56, who lives in Michigan City, Indiana.

After some dead ends, she found McNally, who cast a wide net, testing for more than two dozen genes that could account for Sambrookes' heart and muscle problems.

The culprit turned out to be a mutation in a gene called Lamin that causes Limb-girdle muscular dystrophy. The disease can cause weakness and wasting of the muscles between the shoulders and knees. The mutation can also cause electrical disturbances of the heart.

McNally recommended Sambrookes replace her pacemaker with an implantable cardiac defibrillator that could protect against sudden cardiac death.
That proved to be the right call. Last August, Sambrookes' heart stopped three times. Each time, the defibrillator shocked her back to life.

"She literally tried to die three times," McNally recalls of her patient. "It still takes my breath away."
Although McNally uses panels of 70 to 80 genes in her clinic, she has started experimenting with whole genomes. With the reduced cost of gene mapping, whole genome sequencing is a potentially cheaper, more powerful tool.

The reduced cost of mapping is cutting the cost of research, too -- another factor that could speed clinical outcomes. McNally's team recently published a paper in the journal Bioinformatics in which she used Beagle, a supercomputer housed at Argonne National Laboratory, to analyze 240 full genomes in about two days. Such an endeavor normally takes months.

"That dramatically decreases the cost associated with analysis because we sped up the time," said McNally.

CORNERS OF MEDICINE
Dr. Jay Shendure, associate professor of Genome Sciences at the University of Washington in Seattle, said the impact of gene sequencing is beginning to emerge in specific areas -- after a startup period that was longer and narrower than expected.

"I do think there are these corners of medicine, which are important ones, that may happen relatively quickly," he said.

A key example is the use of a pregnant woman's blood to see if her fetus may have trisomies -- chromosomal abnormalities associated with Down syndrome and other disorders.

"Almost overnight, sequencing is in the process of taking over as the primary means of screening for trisomies in at-risk populations, and maybe eventually to everyone," Shendure said.

The clinical results are promising. A trial of Illumina's test published last week in the New England Journal of Medicine found about 3.6 percent of standard tests for trisomies had false positive results, compared with 0.3 percent with Illumina's Verify test.

That means fewer women would need to go through invasive follow-up diagnostic tests using amniocentesis or chorionic villus sampling, both of which can cause miscarriages.

If the tests become routine practice, Goldman Sachs analyst Isaac Ro estimates the market could reach $6 billion a year.

Venter's new company, Human Longevity, has picked cancer as its first sequencing target. Working with the University of California, San Diego, the company plans to sequence the genomes, as well as the tumors, of every cancer patient treated at UCSD's Moores Cancer Center.

Collins calls cancer a "disease of the genome" and notes that genomics has revealed cancer to be a collection of different mutations, all of which contribute to its growth.

Drug companies have responded with treatments that block aberrant pathways, an approach called precision medicine.

"That's happened pretty quickly because of this window that DNA sequencing has provided," said Collins.

(Reporting by Julie Steenhuysen; Editing by David Greising and Dan Grebler)