Designer & developer communication – Google I/O 2016



SHONA DUTTA: Hi everyone
and welcome to this session of Google I/O 2016 Designer
and Developer Communication. So we’ll be talking about
both visual design and motion design, with a focus
on visual design first. So my name is Shona and I’m
an Android visual designer. KIRILL GROUCHNIKOV: Welcome
everybody, I’m Kirill. I’m a user interface engineer on
the UI toolkit team on Android. JOHN SCHLEMMER:
I’m John Schlemmer. I’m the motion lead on
the material design team. MARK WEI: And I’m Mark. I’m a motion engineer
on material design. SHONA DUTTA: So as I
mentioned, my name is Shona and I’m a visual designer
on the Android Design Team. As a designer I get to
have a lot of fun coming up with really cool solutions to
tough user experience problems. But as I’m creating
those solutions, I also need to be thinking about
how our talented engineering team is going to be
implementing them. So at Google for every
design that’s created, we also create a
spec to go with it. Together we’re going
to take a close look at what makes a spec
successful, the assets that we create to support
that spec, and also implementation throughout
the process, communication, all of that. So let’s get started with specs. A spec, or specification,
is a common way for designers to document the
exact dimensions, distances, colors of objects, and
text within the design. But when we say that a
designer hands off a spec, we don’t mean that they get
to wipe their hands clean as soon as it hits engineering. Implementation of a
spec is a conversation that exists between
design and engineering and it ensures that we’re
tweaking and maneuvering to make the most out of
the Android platform. To make a spec designers need
to understand some basics about the Android
platform first. First of all, there are
a ton of screen sizes that we need to account for. Fortunately, we don’t need
to make unique designs for every one of
the screen sizes. That might drive us
a little bit crazy. Instead, we can rely on density
independent pixels, or dips to ensure that we can
specify one design that will work across a
variety of screen sizes. Dips are flexible units that
scale to uniform dimensions across these screens. Here’s an example that shows
a floating action button across a variety of screens. KIRILL GROUCHNIKOV: I’ve been
doing Android development for over six years. For a little bit
over six years, and I think the most important
part for me to remember, and I hope you also remember,
that Android ecosystem is this continuum of screen
sizes, screen densities, screen resolutions, aspect ratios,
and pretty much every hardware aspect that you can think
of, there’s a variety, there’s this continuum. Much like the world
of web development, web design of the
last 20, 25 years that doesn’t operate on
the fixed size of 800 by 600 pixels, the same applies
to the world of Android design and Android development. You don’t design for
the particular screen that you have in your
hand, just because you love that specific form factor,
be it a four inch, five inch, or maybe a 10 inch tablet. Instead, the layout
solution, the layout spec, the way you
represent information in your particular application,
your particular flow, needs to adapt to reflow to
the amount of screen space that you have. As you can see here
on the left, where you don’t have a lot of screen
real estate on a smaller phone, and you go to this vertical
list with smaller images, and then as you get
a bigger canvas, a tablet-size canvas,
desktop size canvas, you can go to a multi-column
grid with larger images. And that’s just
one example of how the representation of
your content adapts, reflows, responds to
this larger canvas. In the world of web design,
it’s called responsive design and breakpoints where the
layout, the presentation, responds to changes in the
size of your browser window. And a breakpoint is the point
where you say, now my screen is sufficiently large to
switch to a different way of representing
this information. In the world of
Android, up until now, we chose to call
it adaptive design, mostly because you don’t really
resize an Android application window. It kind of goes full
screen, apart from the status bar and navigation bar. This is changing
in Android N, where we are introducing the
split screen functionality for phones as well
as for tablets, where you don’t know
which parts of the screen your app is going to occupy. And it becomes
ever more important for your design solution, for
your implementation solution to scale gracefully to
that space on the screen.
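As a rough illustration of how that reflow can look in code (a sketch, not shown in the talk: the 600dp breakpoint and the catalog view are assumptions), a RecyclerView can switch between a single-column list and a multi-column grid based on the width it actually gets:

```java
import android.content.res.Configuration;
import android.support.v7.widget.GridLayoutManager;
import android.support.v7.widget.LinearLayoutManager;
import android.support.v7.widget.RecyclerView;

// Inside an Activity, once the layout has been inflated:
void configureCatalogLayout(RecyclerView catalog) {
    Configuration config = getResources().getConfiguration();
    if (config.screenWidthDp >= 600) {
        // Bigger canvas (tablet, desktop, or a wide split-screen window):
        // multi-column grid with larger images.
        catalog.setLayoutManager(new GridLayoutManager(this, 3));
    } else {
        // Smaller phone: single vertical list with smaller images.
        catalog.setLayoutManager(new LinearLayoutManager(this));
    }
}
```

Because the check uses the window’s current width in dp rather than the physical screen, the same code keeps working when the app only occupies part of the screen in split-screen mode.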
SHONA DUTTA: So one aspect of the continuum that Kirill is talking
about is screen density. Dips are our primary way of
approaching different screen densities. So let’s talk about how
best to leverage them. To design using dips for
phones in portrait mode, we use artboards that
are rendered at mdpi. So when I say artboards, I’m
talking about artboards in Illustrator. And this is with a
screen size of 360 by 640. Of course we still need to
create additional designs for tablet layouts,
desktop, other form factors. But for the purposes
of this session, we’ll be focusing on a
phone in portrait mode. When we’re ready to
export these screens, we can do so at
1.5, 2, 3, and 4x to create mockups
that are appropriate for different device densities. So in this example, I
have created a mock at 360 by 640, which is mdpi. And then I’ve exported it at
3x to create a screen that is xxhdpi-sized. That means that when I
put the mock on my phone, it’ll show up very
crisp and clear, just like it will in reality
when it’s implemented. There are a few reasons why we
work at mdpi from the beginning instead of at a larger density. Firstly, working at
mdpi ensures that users on smaller or lower
density screens will still be able
to read all the text and tap on appropriately
sized buttons. Secondly, it’s easy to
export mocks and assets at higher resolutions after
they’re designed at mdpi. It’s much harder to reduce
resolution and still count on your designs
being crisp and clean. So in this example we’ve
taken a larger asset and scaled it down, and you can
see that the edges are blurry. Thirdly, when you’re
working on a 100-artboard file in Illustrator, and
each of those artboards is 1080 by 1920,
even vector artwork can get pretty sluggish. So instead, we keep those
files manageable by working at a smaller resolution first. KIRILL GROUCHNIKOV: So as
Shona already mentioned, device independent
pixels or dips are one of the most basic and
one of the most important units or concepts that we have
in the world of Android. It allows you to abstract
away the physical resolution of the screen and operate on
just the right level on just the right units. I remember I joined the
Android team in 2009 just before Nexus One was announced. And that was kind of like
top of the line hdpi screen. Back then we had
ldpi, mdpi, and hdpi. And now we have
double x, triple x, and who knows what will happen
over the next five years? I don’t know. No comment. And it becomes ever more
important to remember that you don’t operate in pixels. Instead, you operate in dips. So every mock that you
get from your designers needs to be implemented, all
the margins, all the sizes, all the paddings in dips, unless
you’re absolutely sure that you want to use pixels for something
like hairline separators. But even those tend to
disappear on double x and triple x hdpi screens. So that might
be the best idea. The only exception to
using dips everywhere is text. For text, we should be using
scale independent pixels, or sp, not sip, sp. The only difference
between dips and sp’s is that in addition
to abstracting away the physical resolution
of the screen, a user is also able to go into
the global device settings and globally
bump up or bump down the text size
across all the apps. And we should respect that
across the entire system. So in your implementation,
all text sizes should use sp units. Finally, everything
that can be interacted with on your screen,
no matter how big or how small the screen itself
is, should be at least 48 by 48 dips. And unlike maybe some parts
of material design guidelines, this is not a guideline. This should be a
very hard requirement that you should not budge from. So as you can see here, an
example of maybe a smaller asset that you get
from your designer, the search icon, which
might be 24 by 24 dips. But still, when you put it in
your actual XML file, however you operate at that level, it
might be in a visual designer, it might be hand coding, you should make sure
that the actual tappable area around that icon
is 48 by 48 dips, which you can do with paddings,
you can do with margins. Or you can go back to your
designer and say, well actually, the easiest for us would
be to cut that asset at 48 by 48 instead of 24 by 24.
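To make that concrete, here is a minimal layout sketch (the icon name, text, and exact values are illustrative, not from the talk): margins and sizes are in dp, the text size is in sp, and the 24dp search icon is padded out to a 48 by 48 dp tappable area.

```xml
<!-- Sketch only: @drawable/ic_search and the exact values are illustrative. -->
<LinearLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="horizontal">

    <!-- Text is sized in sp so it follows the user's global text-size setting. -->
    <TextView
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:layout_weight="1"
        android:layout_margin="16dp"
        android:textSize="16sp"
        android:text="Search your library" />

    <!-- A 24dp icon padded out to a 48x48dp tappable area. -->
    <ImageButton
        android:layout_width="48dp"
        android:layout_height="48dp"
        android:padding="12dp"
        android:src="@drawable/ic_search"
        android:contentDescription="Search" />
</LinearLayout>
```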
SHONA DUTTA: As Kirill mentioned, typography is specced using sp’s,
scale independent pixels. Unlike dips, these
scale based not only on the user’s screen
density, but also on their font preferences. So if the user has Large
Text mode turned on for improved
accessibility, then we can ensure that your app will
observe this and respect it. Now that we’ve gone over
the units of measure for typography, let’s take a
look at a concrete example. This is an example
of a type spec. So you can see that we have
indicated font, color, size, and opacity. So these are all key
pieces of information to communicate to engineering. However, there is one
large piece that’s missing. That is text
placement with regard to other elements on the screen. So in the material spec
they do a great job of making sure that we
have information about how to align text to keylines. And it talks about line
spacing between lines of text. However there are times
at which designers will need to spec text placement
that aren’t necessarily covered in the material design spec. And in those cases it can
be a little bit difficult. This is because design
applications like Photoshop, Illustrator, and Sketch
all treat text bounding boxes a little bit differently. They tend to add their own
padding a little bit at a time, and there’s some
variety in between them. So for this reason, it
can be misleading to try to use text bounding boxes to
place text within a design. Instead of using
the bounding boxes, we recommend that you
use text baselines. By measuring from the
top of a container to the first baseline
in the block of text, and then from the last baseline
in the same block of text to the bottom of
the container, we can accurately place the
text within the container.
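One way to carry that baseline-based measurement into code (a sketch, not from the talk: the 36dp and 24dp values and the view ID are assumptions, and these helpers arrived in newer releases of the support library and AndroidX) is shown below, assuming the TextView’s edges sit flush with its container:

```java
import android.support.v4.widget.TextViewCompat;
import android.widget.TextView;

// Sketch: honor a spec that measures 36dp from the top of the container to the
// first baseline, and 24dp from the last baseline to the bottom of the container.
TextView title = (TextView) findViewById(R.id.event_title);
float density = getResources().getDisplayMetrics().density;
TextViewCompat.setFirstBaselineToTopHeight(title, Math.round(36 * density));
TextViewCompat.setLastBaselineToBottomHeight(title, Math.round(24 * density));
```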
KIRILL GROUCHNIKOV: So the baseline, at least in Latin scripts, is where kind of the
bulk of the text resides, it’s kind of this virtual line. This is not true for all
the scripts out there. Some Indic scripts have a little
bit more vertical variety. The baseline is slightly
below what, if you come from
the world of Latin scripts, we would usually consider
the place where the bulk of the text resides. We have the ascender line
and the descender line, which again for
Latin scripts, that is kind of like the true
bounding box of the text. But if you go beyond the world of
Latin scripts and go global, there are absolutely
scripts out there that have some parts
of their glyphs, some parts of their
characters going above the ascender line, below
the descender line, or both. And these are just
a few examples from around the
world of scripts that have a little bit more
vertical variety in the rhythm. And some developers that
want to have a little bit more precise control over
placement and alignment of text, choose to use the
Android Include Font Padding attribute and set it to False. That works great
for Latin scripts. You have that precise
control, but you will discover that
it starts cutting off those top and bottom parts of
the glyphs under Indic scripts and under a wide variety
of non-Latin scripts. So instead of using
that attribute, which is heavily discouraged,
get the Font Metrics Object from your Text View. Then you can query different
attributes, different fields on that font metrics to get
where the descender line is and a couple of other
parameters, attributes. And then call Set Line
Spacing API in your Text View to increase or decrease
the line spacing based on the particular
requirement from your design. Finally, if you want to
tweak the vertical rhythm in between Text Views
that are stacked vertically, you can set top
and bottom padding on the Text View once
again based on your design, and based on the specific
metrics of the font that is currently being
dynamically used under the particular locale.
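A minimal sketch of that approach (the 28dp rhythm and the view ID are assumptions for illustration; the metrics come from whatever font the TextView has actually resolved for the current locale):

```java
import android.graphics.Paint;
import android.widget.TextView;

// Sketch: derive extra line spacing from the font's real metrics instead of
// hard-coding pixel values.
TextView body = (TextView) findViewById(R.id.body_text);
Paint.FontMetrics fm = body.getPaint().getFontMetrics();

// Height of one line as the currently loaded font renders it
// (ascent is negative, descent is positive).
float fontLineHeight = fm.descent - fm.ascent;

// If the design asks for, say, a 28dp baseline-to-baseline rhythm,
// add only the difference as extra spacing.
float desiredLineHeight = 28 * getResources().getDisplayMetrics().density;
body.setLineSpacing(Math.max(0f, desiredLineHeight - fontLineHeight), 1f);
```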
SHONA DUTTA: So as you can tell, typography can be a tricky subject. However, when it’s done well,
it can add a lot of polish to your app. Also remember whenever
possible, preview your designs on an actual device. So put the mockup on the device. That way you can tell whether
your text is readable. There’s really nothing like
actually seeing the screens on a device to figure out
whether your designs are working or not. Let’s shift gears and talk
about shipping some assets. Earlier we talked
about the importance of working at mdpi when you’re
working on the overall design. This is still true
when working on assets. When making icons, nine patches,
and other graphical elements, we still want to
be working at mdpi. Like in this example, which has
an asset from material design at 24 by 24. By ensuring that the
asset is crisp and clean at the small size, when we
scale it up by 2 or 3 or 4x, we can ensure that it remains
just as crisp at the larger sizes. Going the other direction,
moving from large to small, it’s easy to run into icons
that have blurry edges and poor definition as
they’re scaled down. This is because vector points
that fell on whole pixels at the large size can start
to fall on partial pixels when scaled down. Here’s an example of an
icon that started large and got shrunk down
to a smaller size. You can see that it
has blurry edges. Remember that since we’re
talking about Android, we need to ship assets
at a variety of sizes to suit all screen densities. At Google we
typically use PNGs for this because they
handle transparency very well while still
maintaining small file sizes. However one of the
great new features that we recently introduced
is backwards compatibility for handling vector assets. Previously vector assets were
only available to Android 5.0 and beyond. With the new availability
of vector drawables in older versions of Android,
now assets like this icon can be shipped as a
single vector asset and scaled dynamically
at run time. This is absolutely
fantastic for APK sizes because the size
of a vector asset is far smaller than the
aggregate size of all the PNG assets that one would need
for all the screen densities. But remember designers,
your vector assets will need to be
perfectly snapped to the pixel grid in order
to appear crisp and clean when they are rendered. KIRILL GROUCHNIKOV: So
going back to this variety, this continuum of screen
densities, once again, when I started it
was hdpi was the top. And now we have,
let’s say we’re not shipping ldpi assets
anymore anywhere, but you still have lower spec,
lower end devices at that mdpi. So if you go mdpi, hdpi,
single, double, triple x. And once again, who knows what
will happen in the future? Those PNG sizes start
adding up pretty quickly. And vector assets
are a great way to cut down on that extra
size introduced by the assets, by these PNG assets to your
overall APK binary size. And this backwards
compatibility support for vector drawables that was
added as part of the App Compat Support Library, when
we switched that support library itself to use vector
assets for a variety of search box icons and in a couple of other
places, the overall binary size of that module of that
support library dropped by 9%. And 9%, if you multiply
it by however many devices your app happens to target,
or you hope to have, that’s a lot of bytes. Megabytes, hopefully
gigabytes, terabytes. This is how you
go about enabling backwards compatible
support for vector assets. If your app is targeting
pre-Lollipop devices, devices that ship with that pre-Android
5.0 release, it works the same way: you have your vector drawable
in your Resources folder, and you enable the support for
vector drawables in your Gradle file. And then
your Image View, which is under the hood
automatically converted to use the AppCompat
version of the Image View, has this built-in support
for vector drawables as its source.
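A minimal sketch of that setup (the drawable name is illustrative): in the module’s build.gradle you set `vectorDrawables.useSupportLibrary = true` inside `defaultConfig`, and then point the image at the vector with AppCompat’s `app:srcCompat` attribute instead of `android:src`:

```xml
<!-- Sketch only; ic_search is an illustrative vector asset in res/drawable. -->
<ImageView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="48dp"
    android:layout_height="48dp"
    android:padding="12dp"
    android:contentDescription="Search"
    app:srcCompat="@drawable/ic_search" />
```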
If you still use PNG assets, there’s great news for apps that
want to tint or colorize a certain asset or
more than one asset based on the specific
colors that you’re using. Play Store is a great
example that is using this. In Play Store we have this
kind of color language for different parts of
our media offerings. We have light blue for
books, darker red for movies, orange for music, and so on. So instead of shipping
five or six sets of assets that have
the same shape, but are colored or tinted
differently, instead we can now ship only one base
set of assets, PNG assets, that are just white color. And then colorize them at run
time, tint them at run time, by wrapping the
original drawable and then calling the APIs
to set tint or set tint mode. And this works
great, by the way, with the overall iconography
language of material design that has simple shapes
and that has single color. And that lends itself
very well to representing those icons in vector format.
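A minimal sketch of that wrap-and-tint pattern (the drawable name, the color value, and the `sectionIcon` ImageView are assumptions for illustration):

```java
import android.graphics.Color;
import android.graphics.PorterDuff;
import android.graphics.drawable.Drawable;
import android.support.v4.content.ContextCompat;
import android.support.v4.graphics.drawable.DrawableCompat;

// Sketch: ship one white PNG (ic_media is an illustrative name) and tint it
// per section at run time instead of shipping five or six colored sets.
// context and sectionIcon are assumed to be in scope.
Drawable base = ContextCompat.getDrawable(context, R.drawable.ic_media);
Drawable tinted = DrawableCompat.wrap(base).mutate();
DrawableCompat.setTint(tinted, Color.parseColor("#1565C0"));   // e.g. a books blue
DrawableCompat.setTintMode(tinted, PorterDuff.Mode.SRC_IN);
sectionIcon.setImageDrawable(tinted);
```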
SHONA DUTTA: Now that we’ve got a robust spec and a set of assets to work with,
design’s work wraps up and implementation
begins, right? No, not the case. Remember, we don’t hand off. Instead we consider
design and implementation to be part of a
larger conversation. Since we’re all working
towards great experiences for our users, we make sure that
engineering and design are collaborating from the beginning, so
that when we ship a product it’s both beautiful
and functional. KIRILL GROUCHNIKOV: So I might
be dating myself a little bit. I remember the old days
of the waterfall approach to developing,
designing applications. Those were bad days. So basically designers
would do their own thing as one big iteration. You wouldn’t even
call it an iteration, you would call it a phase. And then they hand over
the entire spec to you as a developer. And then you go
away and you work for however many weeks or months
that happened to be needed. And then this is done. This is not how it should
be going, especially in the world where we target
this variety of screen densities, screen aspect ratios,
screen resolutions, again, you should start with something
quick, low fidelity mocks that you implement and
put on this piece of glass in your hand so that
you can see what works and what doesn’t work. It doesn’t have to work
with the real back end. It doesn’t have to
work with real images. But it definitely informs your
next iteration much better than just looking at this pixel
perfect mock of your phone screen size, phone screen that
is on this gigantic 30 inch monitor that designers love. It’s not the same
level of feedback that you get from having
something on your device. And then your next
iteration, you grow progressively
from low fidelity to higher fidelity mocks. At every step, you know that
you’re making these informed decisions. I call these here
corner cases, but they are anything but. You need to think about what happens
when your implementation, when your app, when your flow
lives in the real world. And the real world is full
of 2G or 3G connections. When you wait for
multiple seconds to see those
information bits that are not yet available
as they’re loading from the mobile network. You live in the
world where people are on metered
connections and so on. So you need to think about what
happens during that loading phase from the
design perspective, how the design scales to
those intermediary states, and how, from the
implementation perspective, you address those empty states,
intermediary loading states. As far as an empty state,
something like Inbox Zero, people love Inbox Zero. So what happens when you don’t
have anything in that inbox? What happens when you just
joined a new social network, and you don’t have
anything in your stream? Instead of discouraging the
user with this empty canvas, instead you can encourage
him or her with something a little bit more human,
a little bit more friendly like seen here in the
example on the left. In the example on
the right, you can see what we call tinting or
colorization of certain pieces of your UI, based on
predominant, vibrant, and other colors from this hero graphic. But what happens if
this hero graphic is loaded from the network,
when you can’t preload all possible albums, all
possible movies, and whatever you happen to have in your
app from the very beginning? You need to think about,
from the design perspective and from the implementation
perspective, what do you do with those
elements, with that FAB, with that progress indicator
where you can already play the song, but
you still don’t have the cover art for it.
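One hedged sketch of that tinting-from-a-hero-graphic idea uses the support library’s Palette class (the `fab`, `fallbackColor`, and the moment you call this are assumptions; the point is that the UI needs a sensible state both before and after the bitmap arrives from the network):

```java
import android.content.res.ColorStateList;
import android.graphics.Bitmap;
import android.support.v7.graphics.Palette;

// Sketch: pull a vibrant color out of a hero image once it loads, keeping
// a sensible default color on the tinted views until then.
// fab and fallbackColor are assumed fields of the enclosing class.
void applyHeroColors(Bitmap hero) {
    Palette.from(hero).generate(new Palette.PaletteAsyncListener() {
        @Override
        public void onGenerated(Palette palette) {
            int vibrant = palette.getVibrantColor(fallbackColor);
            fab.setBackgroundTintList(ColorStateList.valueOf(vibrant));
        }
    });
}
```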
The same goes for localizations, which can go a little bit longer
in German or Swedish. Or in scripts such as Korean or
Chinese, a little bit taller. So your design needs
to be aware of what happens when you’re
not running only on your particular
screen under English US. SHONA DUTTA: So keep in mind
that our designs are often pushing at the boundaries
of what devices are capable of handling. There will always be
technical limitations to what can be reasonably implemented. Navigating those limitations
is the responsibility of both designers and engineers. Like I mentioned
before, implementation is a conversation. And it’s important to get
input from the entire team. Everyone’s working towards
the same goal of creating awesome experiences. It’s when we collaborate
across design and engineering that we end up reaching
the most amazing solutions. And now we’ll hand
it over to John, here to talk about
motion design. [APPLAUSE] JOHN SCHLEMMER: So
thanks Shona, Kirill. I’m John, and I’m here to
talk to you about motion. So motion implementation
can be pretty different than visual
design implementation. But it’s still a really
important piece of the puzzle. So the material design
team just launched some brand new
extensive animation guidelines that can be
seen at design.google.com. And we’re definitely raising
the bar in animations this time. We have object moving
in curved paths instead of in straight
lines, to give the movement a little bit of a more
natural feeling, and less of robotic feeling. We have the width and height
of surfaces transforming asymmetrically to kind of
match this curve movement too. We have content within these
asymmetrically expanding surfaces that follow
these curved paths. And the movement of both need to
stay consistent and contained. There are definitely right and
wrong ways to animate for sure. And that’s what these
new animation guidelines can help you. But that’s only half the battle. At the end of the day, this
needs to get implemented. And all too often,
I see animations being pretty different from
what the original design in After Effects looks like and
the final result in the app. And this is rarely
because of the skill level of either the designer
or the engineer. Sometimes it’s
around the designer making an animation
in After Effects that isn’t quite possible
to do on Android or iOS. But most of the
time, it’s actually about how that motion
is communicated or not communicated to the engineer. A designer can’t just hand
over an exported MP4 file and wipe their hands clean. There are so many subtleties
that go into a proper motion. And there’s no way
anybody else that sees it, designer or engineer,
will be able to quickly pick up on all these subtle details. It’s honestly just not nice
to make an engineer scrub through a video frame by frame
to try to point out and guess what’s going on. The communication
absolutely needs to continue from
that initial export. And proper communication with
that engineer during that, and as he or she
is building it is required to help move
it forward, and see if it’s even feasible on the
platform in the first place. So let’s take a look at this
animation in Google Calendar. It’s probably the most
seen animation in the app. So it’s pretty important
we get it right. We tap on an event, and
we go into the Event View, we go back, and it
collapses again. It’s pretty simple. Just expands and collapses. But there’s so
many details here. So let’s take a look at them. The width of the hard
actually transforms at a slightly offset rate
than the height of the card. The content inside of the
card moves in a strategic way to give you the illusion
that it lines up with that separate Title
field in the Header. The image inside of
the collapse event is actually a partial
reveal of the new view and it lines up with it. The exact timing of when
that old content leaves the view and the new
content comes into the view is also really important
to prevent any flashing or disturbances in the loading. The Edit FAB is
actually attached to the intersection
of the Header image and the white content below it. And the RSVP card in some
cases slides in from the bottom where it’s applicable. And that’s just
opening an event. What about closing it? It’s also not the same
animation reversed. And this is a really
common mistake I see a lot of designers
and engineers make. Simply reversing
this animation is not really enough to convey the
subtleties of motion that were introduced when
you were expanding it. The path may be a
little different, and the time at which the
objects leave the screen are different from when
they enter the screen. And then you have to think
about which easing curve do you use for all this. The motion designer
might know these curves according to their
software, but it might mean completely different
things to the engineer. And outgoing easing
value in After Effects is actually referred
to as an ease-in value in other software and languages, while an incoming easing
value in After Effects is actually referred to an
ease out in other languages and software. So that’s already
pretty confusing. To make it worse, there’s
a right and wrong time to use each one of those. And mixing those up can have
pretty dramatic effects too. There’s a lot of
communication in what would be a seemingly simple
expand and collapse animation. And I really wanted to go over
these details with everybody just to show both the designers
and engineers everything that goes into it. Designers, you
should absolutely be considering every
detail when you’re making a transition like this. And I highly recommend
checking out the new animation guidelines in the material
and the material resources. Engineers, you should
be aware that designers are thinking about this. There are these little things
that go into animation. And you have every right to
be mad at them if all you get is an MP4 file emailed to you
with no further explanation. So how do we at Google actually
communicate this motion design? Every motion designer
has their own tricks. But I’d like to go over what I
found to be the most useful way to get detailed information
to the engineer. Just so they have access
to the right delays and easing curves and values
right from the beginning. We’ll be looking at
this animation in Inbox that you see the most. Opening an email. And here it is slower,
so you can actually see what’s going on. I show this before, but this is
what I have in After Effects. It’s a pretty complex
timeline with a bunch of hidden information
throughout the interface. Somebody not familiar
with After Effects would probably have no idea
what they’re looking at. So giving an engineer an
After Effects file probably isn’t the way to go either. What I do, along with a lot of
other motion designers here, is I create an animation
graph a little like this one. This is the animation graph I
made for the Inbox expanding animation that Mark and
I actually worked on. Yes, it’s still a timeline, but
it’s so much more digestible in this form. Instead of in
frames, it’s measured in milliseconds, which is
what engineers actually need to know. Each line on the graph is
actually an animated property of an on screen element. So opacity, scaling position,
x and y, all of that is located next to the
name of the elements that’s transitioning. The x-axis gives you an idea of
when that transition actually starts and for how
long it takes place. Easing curves are
also noted here. So since material design uses
a pretty common set of four different easing curves, naming
them here and identifying them here can let the engineer
cross reference them with the animation guidelines. And there it will
actually give you the exact interpolator or cubic
Bezier curve for that value.
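On Android, those named material curves translate directly into interpolators. A minimal sketch (the `card` view, the duration, and the end value are assumptions; `FastOutSlowInInterpolator` approximates the same standard curve the guidelines list as cubic-bezier(0.4, 0.0, 0.2, 1.0)):

```java
import android.support.v4.view.animation.FastOutSlowInInterpolator;
import android.support.v4.view.animation.PathInterpolatorCompat;
import android.view.animation.Interpolator;

// Sketch: the material "standard" curve expressed two equivalent ways.
Interpolator standard = new FastOutSlowInInterpolator();
Interpolator fromSpec = PathInterpolatorCompat.create(0.4f, 0f, 0.2f, 1f);

// Apply it to a simple translation read off the animation graph.
card.animate()
    .translationY(0f)
    .setDuration(300)
    .setInterpolator(fromSpec)
    .start();
```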
It’s a little more work for the designer up front to create something like
this for an engineer, but I promise you,
in the end it’ll take so much less time
than trying to figure out where an animation went wrong
and tweaking from an already incorrect one. So this helps me get animations
implemented with accuracy from the very beginning. And while handing off a
video with an animation graph is way better than just a
video, the communication still shouldn’t stop. A designer might have a
slightly tweaked design they need to communicate. Or an engineer might
have a roadblock that they run into as
they’re building it. There’s always going to be
edge cases to work through as Kirill mentioned too. There’s always going to be them. But I won’t get
too far into that. I want to let Mark Wei talk
about his experiences working with designers and
motion designers to implement this motion. [APPLAUSE] MARK WEI: Thanks John. I’m Mark, and I’m a software
engineer focused on motion. I’ve worked with John
on the Inbox app, and now on material design. Front-end engineers often
strive for pixel perfection. Motion engineers strive
for frame perfection. I’m here to talk to you
today about the pain points I’ve encountered while
trying to achieve this goal. As John mentioned,
engineers should not be expected to implement
motion from only an After Effects video. I’ve often found myself
scrubbing through video frame by frame, trying to pry
apart complex motion. Even worse, sometimes
there isn’t even a video. And I’m asked to implement
something on Android by seeing how it was done on iOS. Have you ever tried to
figure out– thank you. Thank you. [APPLAUSE] Have you ever tried to
figure out an easing curve by inspecting the frames? It’s not a very
good use of time. But fundamentally, a
video only shows a slice of the motion spec for
a very specific set of initial conditions. Engineers are always interested
in a more general case. So a richer motion
spec is necessary. This is where the animation
graph comes into play. But sometimes even the best
motion designers and engineers don’t speak the same language. Here are some caveats
to look out for. And keep in mind, these
are all actual examples that I found in apps both
inside and outside Google. Let’s take a look at
this expanding card. The animation graph might have
a component for width expansion and height expansion, but how
exactly does a card expand? Should you scale it
or should you mask it? The end results are
completely different. Try to use specific
language like scale and mask in your motion spec. Avoid generic terms like expand. Things get complicated when
rounded corners are involved. For material design, you may
not want to actually animate the radius of a rounded corner. Instead, think about breaking
it down into overlapping masks. It helps to include a
wireframe video showing the masks in your motion spec. This is especially true
for more complex motion with multiple components. Text scaling is really
hard to get right. Keep in mind that
an engineer may be constrained by her platform. For example, when you change
the font size of an Android Text View, you causes not only
an invalidation of that view but also a re-layout. This may cause
performance issues. A common workaround is to
just scale the entire view, or you could draw the text
manually onto the canvas. The collapsing toolbar in
the Android Design Library as shown, is a good example
of manually drawing the text to allow its font size
to change smoothly. As engineers know, it’s
the edge cases that are the hardest to deal with. What if the text
reflows the multi-line? What if the style changes
to italics or bold? Include text scaling
behavior in your motion spec. Keep edge cases in mind
and know that many times a simple crossfade
is acceptable. And on the topic of
performance, it’s something the designer
should also be aware of. On many platforms, animating
the alpha of the layer is actually quite expensive. For example, on Android,
you must be careful when animating the
alpha of a container like the toolbar in this video. Since its contents are being
translated independently, they must be redrawn
on every frame. This becomes a problem when
the container’s alpha is being animated, and may again
result in performance issues. In this case, the
engineer may choose to apply the alpha to each
individual icon rather than the entire container. If the designer is aware
of these limitations, the motion spec can be
designed around them. Motion specs are
usually concerned with going from state A to state
B, taking a certain duration. But in the real world, user
behavior is not so clear cut. They may want to cancel
out during an animation, or tap on an item that’s
still coming into view. And there’s nothing
a user hates more than having to wait for
transition to finish before continuing to
interact with a UI. It’s usually been left
up to the engineer to decide whether animations
can be interrupted. Going down the easy
path often means that UI elements become
frozen while an animation is in progress. Instead, consider clearly
defining in your motion spec the behavior of interruptions. I’ll leave you with
a personal anecdote. I recently implemented
the transition from a floating action
button to a sheet. What you’re seeing is a sheet
with the circular mask applied to it to look like the FAB. The sheet starts partially
offscreen and translates vertically until
it’s fully on screen. At the same time,
the circular mask expands from the
size of the FAB. The interesting thing is at the
end value of the circular mask expansion is defined
as fully covering the visible area of the sheet. Now some of you may have
realized that this is actually a moving target. As the sheet translates,
the visible area grows. So first I calculate at this
exact time how much of the card will be visible. And then I use that
to determine the n value that the circular
mask expansion should be. This seems to work, but actually
there’s something wrong. Now you’ll notice that
everything in the video actually looks fine. Unlike working
with static UI, you might not find any
obvious visual glitches when motion goes wrong. But when I slow
down my animation, I realize that the sheet was
fully revealed 25 milliseconds before it was intended to. The red highlight you see marks
the beginning of that moment. Now 25 milliseconds may
not sound like much, but that means that instead of
a circular expansion happening over six frames, it now happens
over a much more quicker pace, four frames. When played back at full speed,
it felt ever so slightly off. So I triple check
all my numbers, but everything looked fine. While I was struggling
to find an explanation, I hit a sudden inspiration,
and I scribbled this graph down in my notebook. The longer line describes
how the vertical translation of the sheet shows
more area over time. The shorter line describes how
the circular mask expansion covers more area over time. The curvature of
the two lines are caused by the standard
easing curve applied to both. The last intersection
between these two lines show where we want the
two areas to coincide, satisfying the motion spec. But we can clearly
see that there was an unintended intersection. That represents an earlier time
when the circular mask already covers the entire card, which
the user perceives as a shorter duration. My motion designer did not
encountered this issue in After Effects, but I did
in my implementation. Why is that? It’s because there
exists a certain set of initial conditions. The size of the sheet. The size of the FAB. The distance between
them and the edges of the screen that resulted
in this particular situation. In the end, I communicated
this to my motion designer, and we decided to
change the easing curve of the circular
mask to avoid this. Is this guaranteed to
work for the entire set of possible initial conditions? It’s hard to say. Whenever you have
animations that are defined to be
dependent on one another, you may run into
problems like this. And they will be difficult to
notice and difficult to fix. Great. I was able to resolve this
issue with a simple easing curve change. But this just shows the
importance of communication between designer and developer. Some motion specs are ambiguous. Others are difficult due
to platform limitations. But a few are just plain
impossible because of reality. Having an open
communication will ensure that your intent
behind the motion spec is properly implemented. And if you keep these caveats
in mind, you and your engineer can start to speak
the same language. [APPLAUSE] SHONA DUTTA: So that
wraps up our session. If you’re interested
in learning more, please check out some of
the links on the slide or join us online for
continuing the conversation. Thank you so much for attending. [APPLAUSE]


3 thoughts on “Designer & developer communication – Google I/O 2016”

  1. Really enjoyed the video.
    31:38, is there a good way to export the exact easing curve from After Effects to cubic-bezier values?

    This was an interesting related read.
    https://medium.com/@ryan_brownhill/after-effects-to-css-79225c1d767e#.rh49uufs2

  2. @johnschlemmer thanks for sharing your workflow. Please tell me some of the heartache there is taken out with Pixate, which I see Google just bought. I'm an independent making the switch to Material and really hoping to avoid the kind of work you describe as necessary.
