Because it's a useful tool. And imagine what Louis XIV could have accomplished at Versailles if he'd had one
In just a few minutes I was hooked. In near silence, the drone rose, hovered, and dove, surreptitiously photographing us and the
landscape around us. The photos and video were stunning. By assuming
unusual vantage points, the drone allowed me to “see” so much more of my
surroundings than usual. The view I was “seeing” on my iPad with the
help of the drone would have otherwise been impossible without the use
of a private plane, helicopter, or balloon. With any of those vehicles, I
would have needed a telephoto lens, and all of them would have made an
unacceptable commotion on the beach. What’s more, I would not have been
in the photos!
So much has been done in the past without drones, airplanes, hot air
balloons, or even extension ladders. It is hard to imagine André Le
Nôtre laying out the exquisite landscape designs for Vaux-le-Vicomte,
and later the magnificent Château de Versailles, with no high hill to
stand on, no helicopter to fly in, and no drone to show him the
complexities of the terrain. Yet he did, and with extreme precision,
accuracy, and high style.
Earlier, Henri IV drew up complicated plans for the immense and
elegant redesign of Paris, capital of France. In England, Capability
Brown somehow had the innate vision and perspicacity to reconfigure
thousands of acres into country estates fit for royalty. He and Sir
Humphry Repton invented an entirely new style of landscape design that
had little to do with the grand châteaux of France. It became all about
the “axis of vision” — relaxed, sweeping views into the distance that, without an aerial view, required the utmost in fertile imagination.
In the late 1800s, more people wanted the bird’s eye view of city and
country and went to extreme lengths to rig up guy-wired telescoping
towers, build extension ladders of dangerous lengths, and man hot air
balloons, from which intrepid photographers could capture remarkable
images—such as those of the Chicago Union Stock Yards and the U.S. Steel
Corporation—from heights of 2,000 feet.
What about the Great Wall of China, or the Nazca Lines in southern
Peru? I began reflecting on how the engineers and architects of the past
accomplished so much without the modern tools we have at our disposal.
My mind started racing and I imagined all the different applications
for my drone. I knew that every type of use had already been thought of
by others (governmental agencies, businesses, Amazon.com, Google Maps),
and I knew I could not begin to fathom even a fraction of the
social, ethical, and political challenges the widespread use of drones
would create.
Do they raise legitimate privacy concerns? Should they be regulated? Should we have a national debate?
I don’t have all the answers. But I forged ahead, using a Parrot AR Drone 2.0,
photographing my properties, a party, a hike in the mountains, and a
day at the beach. I did my best to master the moves and angles that
would result in the most arresting pictures and video.
One of my farm workers used his drone, a DJI Phantom flying camera,
to capture amazing images of my 153-acre farm in Bedford, New York.
Suddenly we could see with astonishing clarity the layout of the open
fields, the horse paddocks, the chicken coops, the greenhouses, the hay
barn, the cutting gardens and henhouses, the clematis pergola, and the
long allée of boxwood. The photos were so good I posted them to my blog
on Marthastewart.com. The response was phenomenal!
Henry Alford wrote a satirical essay about me and my drones in The New Yorker
that was really funny but missed the point about why I love my drone.
Drones can be useful tools, and I am all about useful tools. One of my
mottos is “the right tool for the right job.”
A few facts:
The hobbyist drones we can all purchase online or in stores are
technically known as UAS: unmanned aerial systems. Many can fly up to
900 feet. With practice, a novice photographer can take really great
photos.
The shots of my farm were breathtaking and showed not only a very good landscape design — thanks to the surveyors and landscapers who worked with me on the overall vision, much as Le Nôtre worked with Louis XIV — but also what more I can do in the future, and revealed unexpected beauty.
An aerial shot of the vegetable garden looked very much like my Peter Rabbit marzipan-embellished Easter cake, which was designed without the help of a drone.
Martha Stewart, founder of Martha Stewart Living
Omnimedia and Emmy Award-winning television show host, entrepreneur and
bestselling author, is America’s most trusted lifestyle expert and
teacher.
It's a one-wheeled, self-balancing electric skateboard called (appropriately) the Onewheel.
You can't buy one right now. They've already shipped all of their first
production runs and still have Kickstarter backers' orders to fulfill.
After that, though, they might make one for you -- if you come up with a
deposit of $500 against a total price of $1499.
Plus shipping. This may seem like a lot of money to some people, but
enough folks have found it reasonable that Onewheel has sold out not
just its first production run but also the second one. Their Kickstarter success was nothing short of amazing, with $630,862 raised against a goal of only $100,000. Inventor Kyle Doerksen is the man behind Onewheel,
but he's also one of the people behind Faraday Bicycles,
whose flagship model costs $3500 -- and whose initial production run is
also sold out -- which means there are people around who are willing to
pay $3500 for an electric bicycle instead of putting a motor kit on a
used Schwinn for a total cost of less than $500 (with a little careful
shopping).
Researchers have found that an injection of the protein FGF1 stops weight-induced diabetes in mice, with no apparent side effects. However, the effect only lasts two days at a time. Further research and human trials are needed to better understand the mechanism and create a working drug. From the story: "The team found that
sustained treatment with the protein doesn't merely keep blood sugar
under control, but also reverses insulin insensitivity, the underlying
physiological cause of diabetes. Equally exciting, the newly developed
treatment doesn't result in side effects common to most current diabetes
treatments."
Why motion design is now a required skill for designers.
Last week I attended Google I/O for the first time and participated in a small panel about cross-platform design challenges. There was so
much going on that it was a bit of sensory overload, much like walking
down the Las Vegas strip for the first time. Google announced many welcome Android improvements, such as a battery saver mode and lock-screen notifications, something you'd previously need add-ons for, as mentioned in Android is better.
More uses of the Android operating system emerged: Android Wear,
Android Auto and Android TV. A smartphone won't be the only thing that
comes to mind when someone says Android. It'll be this family of screens
from couch to car to wrist.
“If there were no constraints, it’s not design — it’s art.” — Matias Duarte
With Android and other such Google products now being used in more contexts, it became necessary for Google to step back and rigorously think through their design. The resulting visual design language was
dubbed Material Design.
At a high level, it introduces constraints to craft a framework within
which Google and others building on top of Android can more easily make
design decisions.
However, the real news from Google I/O wasn't about Android or Material Design itself. It was Google's
implicit announcement that motion design is now a huge, required
component for creating great software for mobile, desktop and wearable
devices. Motion was mentioned in every design session at I/O. This coming from what has historically been a developer-focused event.
A year ago I had a half-written post sitting in my drafts folder
called “The right tool for the job.” The gist of it was using a suite of
tools during your design process to effectively communicate the
entirety of your intended design. It was going to be about showing
animations and transitions with tools like After Effects and Quartz Composer, and building HTML/CSS/JS prototypes to interact with on your
mobile device.
This was around the time Facebook made waves in the design community when they discussed how their design process for FB Home included Quartz Composer:
Not only does QC make working with engineers
much easier, it’s also incredibly effective at telling the story of a
design. When you see a live, polished, interactable demo, you can
instantly understand how something is meant to work and feel [...] — Julie Zhuo
At the time, incorporating such attention to motion and gesturally interactive prototype work in your design process may have seemed nascent, if not entirely optional, unless you wanted to customize everything and add another level of interaction detail.
"Carefully choreographed motion design
can effectively guide the user’s attention and focus through multiple
steps of a process or procedure; avoid confusion when layouts change or
elements are rearranged; and improve the overall beauty of the
experience.”
Motion can and should go beyond a veneer of polish or delight. It's another avenue for adding personality, educating your users about how to interact with particular elements, and creating a story for the user.
Changing an entire page forces the user to re-scan everything to see what has changed. This affords an opportunity to choreograph, stringing together several transitions to provide context around what is changing.
For example, Google has described much of their motion in terms of ripple
choreography: using a sequence of small, delayed transitions as an
affordance to express the transfer of energy from the user to the
system. By connecting user actions to the resulting change you can
improve the user's understanding of the relationship between spaces.
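To make that concrete, here is a minimal sketch of ripple-style choreography built on the standard Web Animations API rather than Google's own implementation; the .card selector, durations, and 40 ms stagger are illustrative assumptions, not anything Google has published.

```javascript
// Hypothetical sketch: elements enter with small, staggered delays that
// radiate outward from the element the user touched, expressing the
// "transfer of energy" from the user's action to the system.
const cards = document.querySelectorAll('.card'); // assumed markup

function rippleIn(originIndex) {
  cards.forEach((card, i) => {
    const distance = Math.abs(i - originIndex); // steps from the tapped card
    card.animate(
      [
        { opacity: 0, transform: 'translateY(16px)' },
        { opacity: 1, transform: 'translateY(0)' }
      ],
      {
        duration: 225,
        delay: distance * 40,  // the stagger is what reads as a ripple
        easing: 'ease-out',
        fill: 'backwards'      // hold the start state during the delay
      }
    );
  });
}

rippleIn(2); // e.g. the user tapped the third card
```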
Design tools
One of the questions Roman Nurik
asked us on the design panel was about how to best present your designs
to others. This spurred a conversation on the power of functional
prototypes.
Though when you think of the term prototype in the context of the design process over the last five years, more often than not the first thing that comes to mind is something rudimentary, like linking a few pages of a flow together with tap targets. Fast-forward to today, where prototypes
for me mean experiences that can just about fool someone into thinking
they are real apps when put on a mobile device — real page transitions,
draggable elements, scrollable areas, animations, keeping track of state
where necessary and so on.
In the past it probably wasn't the best use of a designer's time to
recreate designs in a tool like Adobe After Effects. Doing anything
beyond sliding in a new page might even have been considered polish.
Polish is a dangerous word, as it implies that it's not vital, and if it's not vital it's likely to be cut from the project when deadlines get tight.
Instead, After Effects was used to detail new microtransitions or object transformations. That was about it — tinkering with small, more complex nuggets of an experience. Beyond that it was easy to communicate with engineering teams about how the rest of the flow was supposed to work: this modal falls down, this page slides in... standard app page transitions and the like.
Times are changing. Things like page transitions will still exist but involve more of the elements on each page. You'll begin choreographing.
In the next few years, consideration for motion will be required to be a good citizen of your desktop/mobile/wearable/auto/couch platform. It will be an expected part of the design process, just as users will come to expect this level of activity and character in software.
One of the popular questions at Google I/O design sessions was how
designers should go about incorporating motion into their design
process. Googlers mentioned that they personally use After Effects, though mainly for microtransitions, things like loaders and icons transforming. They also mentioned their own Polymer web framework, which includes the new Material Design UI components.
In short — there was no good answer. There's a huge opportunity here for new tools to cater to budding new choreographers.
Polymer can help with choreography by including things like animating along a path and some affordances for sequencing animations, but those pieces are only great if you're using the Material Design components exactly as they are and don't need any customization.
I have been using Framer.js on an
almost daily basis to build interactive prototypes of my designs. It's
basically a JavaScript animation framework, and it can take some time to get up to speed with if you're not comfortable with JavaScript. However, unlike with other tools, anything you learn about JavaScript while using Framer is applicable to web development in general.
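For a sense of what that looks like, here is a minimal sketch of a Framer.js prototype using its layer and animation interface (which the framework provides when loaded); the layer names, sizes, and timing values are made-up assumptions for illustration, not from any real project.

```javascript
// Hypothetical sketch: tapping a card slides a detail layer up from the
// bottom of the screen; tapping the detail layer dismisses it again.
// All names and values here are illustrative assumptions.
var card = new Layer({ x: 0, y: 0, width: 640, height: 200 });
var detail = new Layer({ x: 0, y: 1136, width: 640, height: 1136 });

card.on(Events.Click, function () {
  // Animate the detail view into place with a quick ease-out curve
  detail.animate({
    properties: { y: 0 },
    time: 0.25,
    curve: "ease-out"
  });
});

detail.on(Events.Click, function () {
  // Tap anywhere on the detail view to slide it back out
  detail.animate({
    properties: { y: 1136 },
    time: 0.25,
    curve: "ease-in"
  });
});
```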
Framer is exceptional at testing out small bits of interaction or linking together several pages of a flow. But as prototypes grow, with more need for managing choreography, keeping track of state, and working with draggable and scrollable elements, you incur significant overhead managing your code. I found myself creating views to manage other views, much like I used to do with complex pages when I was building Backbone.js apps, but I digress...
There are also WYSIWYG tools like Pixate, which lets you use a drag-and-drop web app to create your prototypes, then view them live on your device. But without a preview mechanism on the web, this seems to slow development down by requiring you to constantly publish to the device.
I'm still waiting on the right tool for this new mix of motion and
interactive prototyping. Building your design also makes you think about
how it should be built and about the constraints of the design, things you might have only run into later when it was actually being developed. And
of course one thing's for sure: putting a real prototype in front of
your team is the best form of communication. No more explaining your
design to others by trying to talk through it .. "then you tap this, and
this happens and that loads, then you slide this.."
What are you trying to say, Stammy
It’s a great time to be a designer. We have never had so many capable platforms to develop for, nor so many ways for people to use our products, across so many new categories of devices.
The more designers we have thinking about motion, the greater our need for great design tools; and the better our design tools, the easier it will become to build our designs as intended. With that we'll have more delightful, easier-to-use products that set users up for success in solving the problems we set out to solve for them.
“To design is to communicate clearly by whatever means you can control or master.” — Milton Glaser
A London-based company, This Place, is launching a new app, MindRDR, that provides one more way to control Google Glass: with your thoughts. MindRDR bridges the Neurosky EEG biosensor and Google Glass, allowing users to take photos and share them on Twitter and Facebook using brainwaves alone. This Place has put the app's code on GitHub for others to use and expand on.
Google Glass has made a name for itself (somewhat infamously) as head-mounted
hardware that you can control with your voice and a sliding finger.
Now a team based out of interactive studio This Place in London is launching a new app that it hopes will kickstart an even
more seamless way of interacting with the device: with the power of your
mind.
MindRDR, as the app is called, links up Google Glass with another piece of head-mounted hardware, the Neurosky EEG biosensor, to create a communication loop.
The Neurosky biosensor picks up on brainwaves that correlate to your
ability to focus. The app then translates these brainwaves into a meter
reading that gets superimposed on the camera view in Google Glass. As
you “focus” more with your mind, the meter goes up, and the app takes a
photograph of what you are seeing in front of you. Focus some more, and
the meter goes up again and the photo gets posted to Twitter.
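The real app is closed-loop Glass software, but the core logic described above is simple enough to sketch. The following is a hypothetical illustration of that two-threshold control loop; the function names, threshold values, and 0-100 attention scale are all assumptions, not taken from This Place's code.

```javascript
// Hypothetical sketch of MindRDR's two-stage focus meter.
// Everything here (names, thresholds, scale) is an illustrative assumption.
const CAPTURE_THRESHOLD = 70; // focus level at which the photo is taken
const SHARE_THRESHOLD = 90;   // sustained focus level at which it is posted

let photoTaken = false;

// Stubs so the sketch is self-contained; the real app would talk to the
// Glass camera and the Twitter API here.
function updateMeterOverlay(level) { console.log('meter:', level); }
function takePhoto() { console.log('photo captured'); }
function postToTwitter() { console.log('photo posted'); }

// Called with each attention reading (0-100) from the EEG headset
function onAttentionReading(level) {
  updateMeterOverlay(level); // redraw the meter over the camera view

  if (!photoTaken && level >= CAPTURE_THRESHOLD) {
    photoTaken = true;
    takePhoto();             // capture what the wearer is looking at
  } else if (photoTaken && level >= SHARE_THRESHOLD) {
    postToTwitter();         // focus harder and the photo gets shared
    photoTaken = false;      // reset for the next shot
  }
}
```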
It’s an early and somewhat primitive vision of how your mind can control Glass.
Yes, there are devices out there that have even more sensors on them,
although that can start to get very expensive (the Neurosky retails for
£71 in the UK, while Google Glass costs £1,000 and the app is free).
And to be honest, the current hook-up is pretty primitive, too. When I
arrived for a demonstration earlier today, one of This Place’s account
managers was cooling Glass down under the air conditioner.
And that’s before you start to put on two different bits of headgear. It can be a little clumsy.
But all this isn’t the point: The idea here is that this is a minimum
viable product, a first step that can be developed further — for
example, to create applications to “train” people to concentrate better,
or to play games, maybe to help suggest places to get a coffee when your sensor picks up that you’re tired, or for medical applications, such as for people with mobility problems.
And potentially, you could build out the basic concept with more,
lighter and easier-to-use sensors. This Place says that among those who
have taken an interest are Stephen Hawking, the famous physicist who is
nearly paralysed because of a progressive motor neuron disease.
To that end, while This Place continues its own development, it has also put the code up on GitHub for others to use and expand on.
When I visited This Place earlier today for a demonstration, Chloe Kirton, This Place’s creative director, who originally conceived of MindRDR, told me that the idea is somewhat related to the kind of work her colleagues do every day for paying clients.
(MindRDR, to be clear, is not a paid project and was not
developed for any client; rather it’s in the vein of other London-based
creative agencies like UsTwo, where employees are encouraged to work on
creative projects that are completely outside of their day-to-day client
work.)
A typical project for This Place, she says, is working on
user experience and user interfaces for large Internet properties. “When
touchscreens first became mainstream it forced the tech industry to
really rethink the user experience,” she says. “Could this become the
basis of a new kind of user interface? Could the future be about an
interface that disappears altogether?”
Part of the interest, too, came out of Kirton’s awareness of some of Google Glass’s shortcomings.
“We saw the problems,” she says. Speaking out loud to your
device is unnatural and could be downright awkward in some cases. And
the finger sliding and tapping is not great, either. “After a while your
arm gets tired,” she says. “You get Glass elbow. We wanted to think of
something that was natural and accessible for everyone.”
Google Glass, for all the glasshole drawbacks,
has become a reference point that has inspired some interesting
applications and concepts for where wearable technology may take us in
the future. That’s included ways to use Glass to pay for things, and how Glass can be used by doctors
and other clinicians. Kirton says that MindRDR is so far the only app
that links up Google Glass with brainwave-reading technology.