10 Basic Mistakes in Digital Painting and How to Fix Them


Digital painting is quite tricky. You get the proper software and that’s it, you can start painting. Every tool, even the most powerful one, is within your reach. All the colors are ready to be used, no mixing needed. If you’ve just converted to Photoshop from traditional painting, it’s not that hard; you only need to find the counterparts of your favorite tools. But if you’re a beginner in both kinds of art, the start is a nightmare—the one in which you’re not aware you’re dreaming!

The trickiness of Photoshop is based on its apparent simplicity: here’s the set of brushes, here are all the colors, here’s the eraser, and this is the Undo command. You start painting, it looks bad, so you search for other tools to make it better. And just look how many tools there are! You try them all, one by one, and magic happens.

However, “magic” means that you’re letting Photoshop paint for you. You don’t have any control over this, but it still looks better than you, a beginner, would ever do (at least, that’s what you believe). So you let this happen and produce lots of pictures in the hope that they’ll turn into masterpieces one day.

Professional digital artists you admire use Photoshop to bring their visions to life, but they use it as a tool, not an art-producing machine. What’s the difference?

  • Professionals imagine an effect and make the program do it.
  • Beginners make the program do something and if they’re satisfied, they keep the effect.

Does the second option sound familiar? If so, keep reading. In this article I’ll show you how to improve ten different aspects of your workflow so that you become an aware Photoshop artist. With these ten simple tips you’ll understand the mistakes that might have been blocking you for a long time!

Note: the problems I’m describing apply to a situation when the artist achieves the effect unintentionally, while going for realistic style. These aren’t mistakes if you plan them!

Starting a new picture is child’s play. You go to File > New or, if you’re more advanced, use the shortcut Control-N. It looks so simple that this step is often overlooked.


There are three aspects to this issue.

Just as all objects are made of atoms, every digital painting is made of pixels. This you probably know. But exactly how many pixels do you need to create a detailed painting? 200×200? 400×1000? 9999×9999?

A common beginner mistake is to use a canvas size similar to the resolution of your screen. The problem is you never know what resolution your viewers use!

Let’s say that your picture looks like example 1 on your screen. It’s as tall as it can be without the need to scroll, and that’s fine for you. That’s the biggest it can get at your resolution, 1024×600. Users with resolutions of 1280×720 (2) and 1366×768 (3) can’t complain either. But notice what happens for users with bigger resolutions—1920×1080 (4) and 1920×1200 (5). Progressively, the picture takes up less and less space on the screen. For these users, you didn’t really use all the height you could!
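The arithmetic behind this is simple enough to check yourself. This sketch (plain Python, with a handful of illustrative screen resolutions) computes how much of the screen height a 600 px tall image occupies on each display:

```python
# Fraction of screen height a 600 px tall image occupies on common
# displays (illustrative resolutions, not an exhaustive list).
image_height = 600

screens = {
    "1024x600": 600,
    "1280x720": 720,
    "1366x768": 768,
    "1920x1080": 1080,
    "1920x1200": 1200,
}

coverage = {name: image_height / h for name, h in screens.items()}
for name, frac in coverage.items():
    print(f"{name}: image fills {frac:.0%} of the screen height")
```

On the 1024×600 screen the image fills 100% of the height, but on a 1920×1200 screen only 50%.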


And it’s not just the matter of “white space” around your picture. “Bigger resolution” doesn’t always mean “bigger screen”. Your smartphone may have more pixels on its compact screen than you have on your PC! Just look:

  1. Same size, different resolution
  2. Different size, same resolution

What does it mean? For viewers other than you, the picture that you planned to fill the whole screen may look like… this:


But the size you choose for your picture isn’t only about this. The bigger the resolution, the more pixels you have to work with. At a small resolution an eye may take up 20 pixels, while at a bigger one it may have 20,000 pixels all to itself! Imagine all the neat details you can put in such a big area!

Here’s a cool trick: when you paint something small in a big resolution, no matter how sloppy it is, there’s a good chance that from a distance (i.e. zoomed out) it will look interesting and intentional. Try it out!


A big resolution lets you zoom into the tiniest details

Does this mean you should always use a huge resolution to give yourself more freedom? Theoretically, yes. In practice, it’s not always necessary, and sometimes it’s impossible.

The bigger the resolution, the more pixels your basic stroke has. The more pixels in the stroke, the harder it becomes for your machine to process it, especially when it comes to pressure levels with variable Flow. So, that’s a practical argument against it—you need a powerful computer to use big resolutions comfortably.
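You can estimate this cost yourself. A hedged sketch: one uncompressed RGBA layer needs roughly width × height × 4 bytes, and a real document (multiple layers, undo history) needs far more, so treat this as a lower bound:

```python
# Lower-bound memory footprint of one uncompressed RGBA layer:
# width * height * 4 bytes (real documents with history and many
# layers use much more).
def layer_megabytes(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 1024 ** 2

for w, h in [(1920, 1080), (5000, 5000), (10000, 10000)]:
    print(f"{w} x {h}: about {layer_megabytes(w, h):.0f} MB per layer")
```

A 10,000×10,000 canvas already costs hundreds of megabytes per layer, which is why big resolutions need a powerful machine.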


The second point is that big resolutions are meant for highly detailed pieces. Contrary to popular beginner belief, not every painting should be detailed. Even when you want to paint realistically, you can safely ignore a whole lot of information that you’d get from a photo—what we see in reality never looks like a photograph!

When you use a bigger resolution than necessary, it may be tempting to add some details here and there, just because you can. And when you do it, there’s no way back. There are many levels of details, but a particular piece must use only one at a time. If you want to create fast, painterly fur, don’t spend hours on the eye and nose—it’ll only make the whole piece look inconsistent and unfinished.

Let’s say you found that perfect resolution for your painting. It’s neither too big nor too small—perfect for the level of detail you want to achieve. Here’s where another mistake can sneak in: that resolution was your working size. Maybe you needed that many pixels to render the detail in the eye, but the finished piece will also be viewed “from a distance”.


Why let them see these nasty details…

… if you can make them see only what’s supposed to be seen?

Before saving the file for the final view, resize it. I won’t tell you the optimal resolution, because there isn’t one. The rule is: the more detailed the piece, the less it loses when presented at a large size; the more “sketchy” the painting, the better it looks when presented in a small version. If you want to learn more about this, see what resolution your favorite artists use when posting their works online.

One more thing: when resizing the picture, check which resample algorithm works the best for you. Some of them sharpen the image, which may or may not be desired.
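If you’re curious what the difference between resample algorithms actually is, here’s a minimal one-dimensional sketch: nearest-neighbour keeps hard edges (which can read as sharpening), while box averaging blends neighbouring pixels. Real resamplers such as bicubic are more sophisticated, but the principle is the same:

```python
# Two toy resampling strategies on one row of pixel values.
def downscale_nearest(row, factor):
    # keep every factor-th pixel: preserves hard edges
    return row[::factor]

def downscale_box(row, factor):
    # average each group of pixels: smoother, softer result
    return [sum(row[i:i + factor]) // factor
            for i in range(0, len(row), factor)]

row = [0, 0, 255, 255, 0, 255, 0, 0]
print(downscale_nearest(row, 2))  # [0, 255, 0, 0]
print(downscale_box(row, 2))      # [0, 255, 127, 0]
```

Notice how the averaged version introduces a new midtone (127) where the nearest-neighbour version keeps a hard black/white jump.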


It may seem trivial—what’s wrong with a white background? It’s… neutral, right? It looks just like a sheet of paper.

The problem is there’s no “neutral” color. “Transparent” is the closest to it, but you can’t paint with it. Color is color. When two colors occur together, a certain relation appears on its own. For white + color A, this relation is: “color A is dark”. No matter what your intention was, you start with a dark color only because you used the brightest background possible! Everything is dark in comparison to it.


The same shade changes its relative brightness when seen on different backgrounds

In traditional art we use a white background, because technically it’s easier to put dark on bright than the other way around. But there’s no reason for it in digital art! In fact, you could even start with a black background, but it’s just as bad an idea as pure white. In practice, the most “neutral” color we can get is 50% bright gray (#808080).

Why? The color of the background changes the way you perceive other colors. On a white background, dark shades will appear overly dark, so you’ll be avoiding them. On a black background it’ll work the same with bright shades. The result in both cases will be weak contrast that you notice once you try to add some other background. Here’s the proof:


I’ve started this picture with too dark a background, and when I wanted to add a brighter one, it looked blinding!

Experienced artists are able to start their picture with any color and make the best of it, but unless you feel confident about color theory, always start with something neutral—neither too dark, nor too bright.


Of course, sometimes your perception of bright and dark may be disturbed by the poor quality of your screen. If you use a laptop, you’ve probably noticed how the contrast changes when you look at your picture from an angle. How can you achieve a proper contrast that everyone will see as proper, no matter what screens they use?

Even if your screen is good, your perception doesn’t stay undisturbed after focusing on a picture for a long time. If you change shades slightly, step by step, the contrast may seem good only because it’s better than it was five steps before. The object below may look nice…


… until you compare it to an object with a stronger contrast. And who knows—maybe if you compared that one to yet another, its perceived contrast would drop as well?


Photoshop has a tool that helps in this situation. It’s called Levels, and it’s actually a histogram—it shows you how much of every shade there is in the picture. You can open this window by going to Image > Adjustments > Levels or by using the shortcut Control-L.


How does it work? Take a look at these four samples:

  • Almost equal amount of white, black, and midtones
  • Black and dark midtones only
  • White and light midtones only
  • Black and white only, almost no midtones

Can you read it from the histogram?


You can modify the levels by dragging the sliders. While it reduces the general amount of shades, it helps to place them in the correct place in the histogram.


The histogram shows us there are a lot of midtones in our object, but there’s also a visible lack of dark and bright areas. No matter what we see, that’s what the computer says! While there is no one perfect recipe for levels (it all depends on the lighting of your scene), a total lack of dark and bright areas is a bad sign.
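Under the hood, moving the Levels sliders is simple arithmetic: the range between the black point and the white point gets remapped to the full 0–255 range, and everything outside it clips. A minimal sketch (not Photoshop’s exact implementation):

```python
# A minimal Levels adjustment: remap [black_point, white_point]
# to the full [0, 255] range; values outside the range clip.
def apply_levels(pixels, black_point, white_point):
    scale = 255 / (white_point - black_point)
    return [min(255, max(0, round((p - black_point) * scale)))
            for p in pixels]

# a weak-contrast image: midtones only, no true darks or lights
midtones = [90, 110, 128, 150, 170]
print(apply_levels(midtones, 80, 180))  # stretched toward the full range
```

The stretched values reach further into the darks and lights, which is exactly the contrast boost you see when dragging the outer sliders toward the middle.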

Just see what happens when we move the sliders towards the middle!


The contrast has changed nicely, but blending has suffered from it—it’s because we have fewer midtones now. But it’s not hard to fix it manually!

Is there a way to use proper shades from the beginning, so that all of this isn’t necessary? Yes, and it’s actually going to take you less time than usual! The solution is to use fewer shades—a dark one, a bright one, a midtone, and a bit of black and white.


To do it in practice, before starting a picture, plan your lighting on a sphere:

  • Draw a circle and fill it with the darkest shade (black not recommended).
  • Add a midtone.
  • Add the brightest shade (white not recommended).
  • Add one or two midtones in between.
  • Add a pinch of black and white.

Do you see where all these parts lie on the histogram? When we merge them, this is what appears. Use this sphere as a set of swatches for your object, shading it the same way—darkest shadow, midtone, brightest light, more midtones, darkest crevices and brightest highlights. Then you can blend it all!
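If you like to think numerically, the same exercise can be sketched in code: Lambert shading (brightness proportional to how directly the surface faces the light) sampled over a sphere, then snapped to five hand-picked values. The palette and light direction below are illustrative choices, not a recipe:

```python
# Lambert shading sampled over a sphere, snapped to five values.
# PALETTE and the light direction are illustrative choices.
import math

PALETTE = [40, 90, 128, 170, 215]  # darkest shade ... brightest shade

def sphere_value(x, y, light=(0.6, -0.6)):
    """x, y in [-1, 1]; returns a palette value, or None outside the sphere."""
    r2 = x * x + y * y
    if r2 > 1:
        return None
    z = math.sqrt(1 - r2)  # z component of the surface normal
    lx, ly = light
    lz = math.sqrt(max(0.0, 1 - lx * lx - ly * ly))
    lambert = max(0.0, x * lx + y * ly + z * lz)  # n . l, clamped at 0
    index = min(len(PALETTE) - 1, int(lambert * len(PALETTE)))
    return PALETTE[index]

print(sphere_value(0.5, -0.5))   # lit side: one of the bright values
print(sphere_value(-0.7, 0.7))   # shadow side: the darkest value
```

Quantizing to a few values first, and blending later, mirrors the swatch workflow described above.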


If you blend the sphere, you’ll get a really nice histogram with a lot of midtones!

One last thing. If you compare these two heads once again, one with proper contrast from the beginning, and a corrected one, you’ll notice the difference. That’s why increasing contrast can’t really fix your scene if you haven’t considered values from the start—every material has its own range of shades. For example, the darkest part of a white surface will be much brighter than the darkest part of a black surface. It means you should prepare as many spheres as there are different materials in your scene.

Remember: a light object shaded with dark shades is as wrong as a dark object shaded with light shades!


When you compare traditional brushes to Photoshop ones, the difference is so striking that you may wonder why they share the same name. After all, classic brushes let you paint only more or less chaotic strokes, while digital ones create artworks on their own.

And here’s the catch. If they create anything on their own, you give up your control. Professional artists use simple strokes most of the time, supporting themselves with more complicated ones from time to time. Using complex brushes on a daily basis not only makes you lazy—it also stops you from learning how to achieve an effect on your own, because there’s no need to!


When starting your adventure with digital painting, it’s natural to search for ways to progress as quickly as possible. You want results, here and now. Custom brushes are the answer for it. You want fur—here’s the fur brush; you want scales—here’s the scale brush. You can’t paint something—download a tool that can.

Custom brushes aren’t bad—they’re actually very useful. The problem occurs when you use them as a base of your “skill”. If you actually spent time trying to understand how to paint fur quickly, you’d understand it’s not really made of single hairs that you need to paint one by one. You’d learn that the vision of something in our minds is often inconsistent with reality. You’d learn how to look and how to create what you see, not what you think you see.

Instead, you give up after spending half an hour painting single hairs and search for a brush that will do it for you. You find it, you’re happy, you’re ready to move on. It’s so easy that it becomes addictive, and you stop learning altogether—you’d rather download all the tricks, if that were possible.

But how do traditional artists manage? They don’t really have such a variety of brushes. How do they paint fur? The answer is simple—the same way you would if custom brushes didn’t exist. If you want to improve, if you want to beat this beginner’s curse, discard complicated brushes for a while. Stick to a simple set, for example this one, and learn how to make the best of it. Don’t look for shortcuts when it gets difficult. Fight through it, and you’ll gain priceless experience instead of cheap tricks.


No “fur brush” was used here

Another common mistake linked to brushes is using strokes that are too large. It’s, again, born of impatience. The rule is that 80% of the work takes 20% of the time spent on it, which means you need to spend 80% of the working time finishing your artwork. If you created the sketch, the base, flat colors, and simple shading in two hours, there are still eight hours left. Moreover, during those eight hours you’ll see less progress than during the first two! How discouraging is that?

You can clearly see it in the process pieces artists sometimes show to their public, like this. The first steps are huge—something from nothing is created. And then it slows down. You can barely see any difference between the last few steps, and yet that subtle difference probably took more time than all the previous ones!


At which point would you stop?

This is the problem. When your picture is almost finished, you feel that urge to finish it and see it done already. But this is the moment when you should actually start! I remember reading a comment under one of these painting processes: “I would stop at Step 4” [out of 10]. And that is the difference between that beginner and the professional! Because the other part of that rule is: that last, slow and subtle 20% of the work contributes to 80% of the final effect.

The solution is simple. You should never finish your picture with big strokes (unless it’s a speedpainting). They are reserved for the beginning, for that 20% of time. Use them to block the shapes, to define lighting, to add big chunks of colors. And then gradually go smaller, zoom in, clean up, add the details. You’ll see you’re finishing when you’re working with a very small brush in a very big closeup. Generally, the more places in your picture you touch with your brush (and the more you change it, e.g. adding a slight difference in brightness or hue with every little stroke), the more refined it will look.

There’s a bright side to this rule. Since 80% of the piece doesn’t contribute that much to the final effect, there’s no need to overly focus on it. Start your paintings quickly, loosely, and save all your efforts for later. Remember: not every painting must be finished just because you started it. By discarding something you don’t believe in, you’re saving yourself four times as much time as you’ve already wasted!

Traditional artists don’t have too many colors ready to use. They must learn how to create them, how to mix them to get the effect they want. This inconvenience is, in fact, a blessing. They have no choice but to learn about color theory. You, as a digital beginner, have all the colors possible within your reach. And that’s a curse!


We don’t understand colors intuitively. There’s no need to in our daily lives. But as an artist, you must completely change your perspective. You can’t rely on intuition any more, because it works poorly in this topic. You need to stop thinking about colors as you know them, and grasp the concept of Hue, Saturation, Brightness, and Value.

Colors don’t exist on their own. They’re based on relations. For example, if you want to make a color brighter, you can either increase its brightness, or decrease the brightness of the background. Red, called a warm color, becomes warm or cool depending on what its neighbor is. Even saturation changes because of relations!


Even the hue of a color can change depending on its environment. And this knowledge is crucial for painting, not only to design, as you may think!

A beginner unaware of all this fills their sketch with a whole set of random colors. They pick a blue, then add a green, without any clue they’re picking them from thousands of other colors with bluish and greenish hue, and that they all have some kind of power!

This is how a beginner sees the colors:

  1. Blues
  2. Muddy blues
  3. Grays
  4. Blacks

But… why do we even have so many shades, if they’re useless? The answer is, they’re not. You only need to understand where they come from and what they mean. Let’s see the same color picker as seen by a professional:

  1. Desaturated blues
  2. Saturated blues
  3. Bright blues
  4. Dark blues

Looks complicated? Probably, but that’s no reason to ignore it! If it’s too overwhelming, stick to grayscale for a while. Understand the issues of lighting, shading, and blending, and you’ll have a solid base for the future. What’s more, colors (or rather, hues) are the icing on the cake that is your artwork. They can make it sweeter, but they can’t be its base. No amount of icing makes a bad cake good!
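The picker mapping itself is easy to verify with Python’s standard colorsys module: for a fixed hue, the horizontal axis of the square is saturation and the vertical axis is brightness, so “muddy blues” are just low-saturation blues, and “blacks” are low-brightness anything:

```python
# For a fixed hue, the color picker square is a saturation/brightness
# grid; colorsys (standard library) reproduces the mapping.
import colorsys

def picker_color(hue, x, y):
    """x = saturation (0-1, left to right), y = brightness (0-1, bottom to top)."""
    r, g, b = colorsys.hsv_to_rgb(hue, x, y)
    return tuple(round(c * 255) for c in (r, g, b))

BLUE_HUE = 240 / 360
print(picker_color(BLUE_HUE, 1.0, 1.0))  # top right: pure saturated blue
print(picker_color(BLUE_HUE, 0.2, 1.0))  # top left area: a desaturated blue
print(picker_color(BLUE_HUE, 0.5, 0.1))  # bottom: a "black" that is still blue
```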

And if you decide to start your color course, try these articles for a start:

It’s hard to fight this temptation, so hard that it may make you cry. I know it well. Still, if you want to really learn digital painting, you mustn’t use the Eyedropper Tool to borrow a color from a reference. Why is it so important?


Beginners usually use a low-saturation orange or pink for skin. It seems obvious, but the effect is far from natural. If you use a reference, though… that’s a different story! Almost every pixel has a different shade, not only pink—you can find reds, yellows, oranges, even cold purples, greens, and blues. Saturation and brightness change all the time, and still, the final effect doesn’t resemble chaos.

When you pick colors from a reference, your own picture gains new life. The problem is that it’s not that different from tracing. Tracing gives you lines you couldn’t repeat yourself, and picking colors gives you beautiful shading you couldn’t repeat either. The effect is amazing—but you can’t take credit for it.

There’s another thing. Picking the colors stops your progress. You “buy” the color sets instead of learning how to create them yourself. You’ve got your color wheel with all the sliders; every color you pick can be recreated by you, on your own. But you still decide to use the ones that have already been created—it’s fast and effective, but you know what’s even faster and more effective? Taking a photo.

To become independent from references one day, you need to learn to see the colors. Look at any object near you—what’s its hue, brightness, saturation? It’s extremely hard to tell, isn’t it? But if you keep picking the colors using the Eyedropper Tool, you’ll never learn it. You can’t start a race with your training wheels still on.


All these studies were painted by me from a reference, but without using the Eyedropper Tool. As a beginner, start with simpler things—the fewer colors the better

I painted this picture in 2011. This is certainly a nice, heart-warming scene, and I still quite like it. I remember I painted it in grayscale and then added colors using probably a few Blend Modes (Color, Multiply, Overlay). I remember having this one annoying problem: how to get a proper yellow when painting over grayscale?


I don’t have the original file any more, but this is probably how the grayscale version looked. Notice that both the green and yellow areas are equally dark. This is not true in reality!


When I was a beginner like you, I believed that light makes all colors uniformly brighter, and shadow makes them uniformly darker. That’s why painting in grayscale seemed so convenient: I could focus on shading and add colors later. Unfortunately, this trick didn’t work, and it took me a long time (mainly because I didn’t really try) to understand why.

The simple answer is that different colors have different intrinsic brightness, independent of lighting. When you ignore this, you get muddy colors as a result. Colors added directly to a grayscale picture are stripped of an important part of themselves. If you want to learn more about it, I’ve written a comprehensive tutorial on the topic of value.
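You can see this intrinsic brightness in numbers. The standard Rec. 601 luma weights show why a pure yellow can never be as dark as a pure blue, no matter the lighting:

```python
# Rec. 601 luma weights: the perceived value of a color depends
# heavily on its hue, before any lighting is applied.
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

print(round(luma(255, 255, 0)))  # pure yellow: intrinsically light
print(round(luma(0, 0, 255)))    # pure blue: intrinsically dark
```

Paint a yellow and a green area with the same gray value, and the later coloring step can’t restore this difference—hence the muddy result.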


Both heads have the same colors applied with Color mode. Notice it’s not the coloring layer, but what’s beneath that matters

The Dodge and Burn Tools are among beginners’ favorites. They fit neatly into the belief that Photoshop is a “painting program”. You only need to choose the base colors, and then select the areas you want shaded. The rest is controlled by advanced algorithms, not by you, and that’s great, because you wouldn’t know what to do anyway.


However, it’s not that easy. These tools aren’t completely useless, but when you’re a beginner, it’s better to stay away from them. They’re not “shading tools”. The Dodge Tool isn’t equal to “add light”, and the Burn Tool isn’t equal to “add shadow”, either. The problem is they fit a beginner’s vision about shading so neatly that it’s hard to overcome the temptation.

The problem doesn’t lie in the tools, but in a misunderstanding of shading itself. Beginners often think that objects have some color, and this color gets darker in the shadows, and brighter in the light. It’s not that simple. It may work well for cell-shading, or for cartoon purposes, but even there this is just a shortcut.

If it “kind of” works, why not just use it?

  • It’s another technique that blocks your progress. When using it, you don’t even notice what you’re missing. Shading is a complex issue, and you limit it to the “darken-lighten” rule, because it’s easy. Photoshop is here to work for you, not instead of you. Don’t let it stop you from learning.
  • It flattens the shaded object. No matter how many textures you add to it afterwards, it works just like a big brush—which means you can start a picture with it, but never finish it this way.
  • It misrepresents the colors; they should be dependent upon the environment (direct light, ambient light), but neither the Dodge Tool nor the Burn Tool knows anything about the background you’re using. They shade everything the same way!
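The third point can be sketched numerically. Below, a naive “burn” just scales the base color toward black, while an environment-aware shadow leans toward the ambient light’s color; the mixing factors are arbitrary illustrations, not how Photoshop’s Burn Tool actually works:

```python
# A naive "burn" darkens every channel uniformly; a shadow that
# respects ambient light shifts toward the ambient color as well.
# The 0.35 mixing factor is an arbitrary illustration.
def naive_burn(color, amount):
    return tuple(round(c * (1 - amount)) for c in color)

def shadow_with_ambient(color, ambient, amount, ambient_mix=0.35):
    darkened = [c * (1 - amount) for c in color]
    return tuple(round(d * (1 - ambient_mix) + a * amount * ambient_mix)
                 for d, a in zip(darkened, ambient))

orange = (220, 140, 60)
sky_blue = (90, 130, 200)
print(naive_burn(orange, 0.5))                     # same hue, just darker
print(shadow_with_ambient(orange, sky_blue, 0.5))  # shadow leans toward blue
```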

There’s a one year difference between these pieces. The first one was shaded with the Dodge & Burn duo, the other one with an understanding of color.

The extension of this technique is shading by using white for light and black for shadows. This comes from a belief that every color starts as black (in the shadow) and ends as white (in the light). While it may be true for over- or underexposed photographs, it shouldn’t be a rule used in painting.


We all search for simple rules, something easy to remember and use. But that doesn’t mean we should invent simple rules that have nothing to do with reality, like: “add white to make it brighter, add black to make it darker”. That one is true only in grayscale! If you want to learn what rules you can use to shade with colors, see this article.

When you overcome the previous problem, you may fall into another one. Let’s say you’ve chosen orange as the base color for your character. You decide that the light source will be yellow, and the ambient light will be blue (sky). According to this, you shift the hue of the base color to yellow in the light and blue in the shadow, and that’s all. It makes shading much more interesting than if you used black and white, but it’s still a shortcut that doesn’t necessarily lead to the desired results.


Why is it a shortcut? By creating only three basic shades for your object, you automatically place it in an artificial environment, where everything is reflecting light in a 100% predictable way.

In reality, light continuously bounces off everything around, including the “hills and valleys” of your 3D object. Therefore, shading rarely can be limited to two or three colors. The blue of the sky can make some shadows on the object blue, but in some other crevices they may look greenish because of the light reflected from the grass. Moreover, some shadows may be actually bright and saturated because of light that came through the obstacle into the “shadow” (see subsurface scattering).

If you take this into account and decide to use indirect light sources to make the shading more varied, you’ll be forced to paint more deliberately—and that’s great! You can’t use a big brush here, because it would mix the colors and you’d lose them as a result. And a small brush means you’re creating a texture on the fly!


There are two main methods beginners use to blend shades, both designed for quick effects:

  1. Blending with a soft brush
  2. Blending with the Smudge/Blur Tool

As we’ve already learned, quick effects often mean they’re out of our control. In this case, blending with big strokes flattens the object and makes it unnaturally smooth. Even if you add a photo texture afterwards, it will not take that “plastic” effect away. Again, this method may be good only for that starting phase.


If you want to get a nice subtle texture (which is good for most of the natural materials), use a harder brush with Flow controlled by pen pressure (the harder you push, the more solid the stroke).


This kind of brush lets you control the amount of color you want to use.


Thanks to this attribute, you don’t need to blend the border between two colors. You just start with the base color and cover it lightly with the other one. Then you can add another layer of the same color, making it more solid.
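What variable Flow does is ordinary alpha compositing, applied pass after pass: each stroke mixes the new color over what’s already there, so light passes approach the target color without ever overshooting it. A sketch:

```python
# Variable Flow as repeated alpha compositing:
# result = new_color * flow + existing * (1 - flow)
def stroke(base, color, flow):
    return tuple(c * flow + b * (1 - flow) for c, b in zip(color, base))

paper = (200.0, 180.0, 150.0)   # existing color
paint = (60.0, 80.0, 120.0)     # color being applied

layer = paper
for n in range(1, 4):
    layer = stroke(layer, paint, flow=0.3)
    print(f"pass {n}:", tuple(round(c) for c in layer))
```

Each pass moves the layer a fixed fraction of the remaining distance toward the paint color, which is why repeated light strokes make a stroke gradually more solid.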


If you want to make it smoother, just pick a color somewhere in between and paint lightly over the border.


For a stronger texture, use a textured brush (with rough edges).


According to the 80-20 rule, don’t worry about blending in the first phase. Use a big brush and leave the edges visible, creating exaggerated shading.


Later you can use a smaller, textured brush to blend the edges. No Smudge Tool, no soft brush, just the Eyedropper Tool and your hard brush with variable Flow. Remember: blending depends on the texture of the surface, so you can’t blend every material the same way!


Photo textures are a beginner’s last resort, when the object is theoretically finished, colored and shaded, but it still looks like a plastic toy. However, a texture itself may make it even worse.

Let’s say we want to add a texture to this big cat.


The object must be shaded before adding textures. The tricky part is it doesn’t need to be a complete shading. The method of blending depends on the texture—if you blend without having any texture in mind, you’ll get a non-texture blending (smooth surface).


You can download a texture from the Internet, or use one of the Photoshop patterns—there are lots of them in the default sets. This is my favorite pattern for scales, inverted Screen Door.


If you change the texture layer’s Blend Mode to Overlay, you’ll see it applied to the shading. However, notice that some parts have been brightened. You might like this effect if the shading wasn’t done properly, but it’s another case of giving up control. In most cases, we don’t want the texture to create its own version of shading. While the Overlay mode isn’t the best solution, it lets you see how the texture looks on the object.

Now, the most important and overlooked part. If the object is meant to be 3D, it can’t be nicely covered with a 2D texture. We need to adjust its shape to the form it covers. There are three main ways to do this—experiment and find your favorite:

  • The Free Transform Tool (Control-T) in the Warp mode
  • Filter > Liquify
  • Edit > Puppet Warp

For spheres it’s better to use Filter > Distort > Spherize

Before using Puppet Warp

After using Puppet Warp

The Overlay mode brightens the parts of the layer covered with white areas of the texture. We could use Multiply instead (it makes white areas transparent), but it would make gradual tones (grays) darker than necessary. There’s another tool perfect for adjusting a texture’s transparency.
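The standard formulas (on values normalized to 0–1) make the difference clear: Multiply only ever darkens, while Overlay darkens below 0.5 and brightens above it, which is exactly how it invents its own shading:

```python
# Multiply vs Overlay on values normalized to 0..1.
def multiply(base, blend):
    return base * blend  # never brighter than either input

def overlay(base, blend):
    if base < 0.5:
        return 2 * base * blend                  # darkens dark areas
    return 1 - 2 * (1 - base) * (1 - blend)      # brightens light areas

white = 1.0  # a white pixel of the texture
print(multiply(0.8, white))  # white is transparent under Multiply
print(overlay(0.8, white))   # but Overlay pushes lit areas to pure white
print(overlay(0.3, white))
```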

Double click the layer and play with the Blend If sliders. In brief, you can adjust the transparency of white and black with them.

digital painting how to apply texture opacity blend if sliders

Hold Alt to “break” a slider and get a more gradual effect
digital painting how to apply texture opacity blend if sliders white transparent
digital painting how to apply texture opacity blend if sliders example

We need to understand what a texture really is. It’s not a “rough pattern” placed right on the object. It is, actually, the roughness of a surface. When light hits a smooth surface, it’s reflected uniformly. If the surface is rough, i.e. made of tiny bumps and crevices, the light hitting it will create a whole set of tiny shadows. And that’s the texture we see.

Another fact can be derived from this. If it’s light that creates visible texture, there can be no texture without light. And what is shadow, if not lack of light? Therefore, we need to reduce the texture in dark areas (if there’s ambient light present), or remove it altogether (no light, no texture). You can use a Layer Mask for this, or play with the Blend If sliders (the second row). Keep in mind that the crevices of texture are in fact shadows, so they shouldn’t be darker than the “normal” shadow area.

digital painting how to apply texture dark transparent
digital painting how to apply texture dark transparent example
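The relationship between light and texture described above can be sketched in a few lines of Python. This is my own illustration with hypothetical 0..1 values, not a Photoshop feature—it just shows the rule “no light, no texture”:

```python
def textured_shade(base_shade, texture_detail, light):
    """Add texture detail to a base shading value (all values 0..1),
    scaled by the light level: full texture in full light,
    none in complete darkness (no light, no texture)."""
    # texture_detail is centered on 0: crevice shadows are negative,
    # lit bumps are positive.
    return base_shade + texture_detail * light

# In full light a crevice (negative detail) darkens the surface (~0.7)...
print(textured_shade(0.8, -0.1, 1.0))
# ...but in total shadow the texture vanishes entirely (stays 0.1):
print(textured_shade(0.1, -0.1, 0.0))
```

This is the same reasoning you apply with the Layer Mask or the Blend If sliders: the texture’s strength follows the light, it never overrides the shadow.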

Applying a texture is fast and easy once you know what actions to take after choosing one. However, this is not the end. Every texture is different, and while some of them will look great when applied directly, most require some more work.

Again, the 80-20 rule applies here. Adding a texture is easy, but making it work is what takes the most time. In my example I’ve blended the shading edges with individual scales. Things like this are very time consuming, but they change everything!

digital painting how to shade texture photoshop
digital painting how to shade texture photoshop example
digital painting how to shade texture photoshop comparison

The first sphere has a flat texture in Overlay mode with lower Opacity, to make it less apparent; the second one is the same, but with distortion applied. Compare them to the last one, with custom Blend If values and manually adjusted blending

As we noticed, most of the beginner problems come from that urge to achieve great things effortlessly, as quickly as possible. So it’s not really about lack of skill—rather, it’s about that deep belief that Photoshop is an art-making program. That leads to a constant search for tools and tricks, instead of making the effort to understand and solve the problem.

You can’t become a digital artist in one day, only because you’ve purchased an advanced piece of graphics software. Photoshop is just a tool—more convenient to use than pigments and brushes, but it’s still a tool. It can’t do more than you intend it to! If you want to make the best of it, treat it as a digital canvas with digital colors. Forget about all these fancy tools, filters, brushes, or Blend Modes. Just paint, as you would on a real canvas.

Learn color theory, perspective, anatomy—all these things that “normal” artists must learn. With time you’ll understand how to use Photoshop’s tools to do the same things easier and faster—but don’t put the cart before the horse by trying to get great effects without understanding them. Patience is the key!

And although this article has been about developing skills rather than relying on technological solutions, you can still find some useful Photoshop resources over on Envato Market.

Creating Specular Maps

* First make a normal map.
* Find some texture for details, like scratches or dots, and clone it all over the map in Photoshop.
* Import the normal map, run Filter > Stylize > Find Edges, hue it down, invert it, adjust the levels, and place it in the specular map in Screen mode. This will make your normal map look way better.
* Use the diffuse map, desaturate it, and put it on top of everything with Opacity around 60%.
* Sharpen everything.

Awesome MAYA scripts

* Change the specific attribute of all the phong nodes in the Maya scene:

    string $m;
    string $ms[] = `ls -type "phong"`;
    for ($m in $ms)
    {
        setAttr ($m + ".reflectivity") 0;
        setAttr ($m + ".specularColor") -type double3 0 0 0;
        setAttr ($m + ".cosinePower") 2;
        setAttr ($m + ".translucenceDepth") 0;
        setAttr ($m + ".translucenceFocus") 0;
    }

* Set “alphaIsLuminance” to 0 on all textures:

    string $m;
    string $ms[] = `ls -tex`;
    for ($m in $ms)
        setAttr ($m + ".alphaIsLuminance") 0;


* Clean up pasted__ objects:

## This script requires pymel

## This script is a quick and simple way to clean up a scene after
## copy and pasting in Maya. It searches for objects with the name
## starting with pasted__, ungroups them if they came in a group,
## and renames them. Pymel automatically renames them uniquely if
## needed.

## Import pymel
from pymel.core import *

## Stores a list of all objects starting with pasted__
pastedObjects = ls('pasted__*')

## For each thing in the list, store it as object
for object in pastedObjects:
    ## Get a list of its direct parents
    rel = listRelatives(object, p=True)
    ## If that parent starts with "group"...
    if rel and str(rel[0]).startswith('group'):
        ## ...it was pasted inside a group, so move it back to root level
        parent(object, world=True)
    ## The new name is the object's name without pasted__
    newName = object.name().replace('pasted__', '')
    ## Rename the object with the new name
    rename(object, newName)

* BVH Import

/*  This file downloaded from Highend3d.com
''  Highend3d.com File Information:
''    Script Name: BVH Import v2.0
''    Author: Sergiy
''    Last Updated: August 4, 2002
''    Update/Change this file at:
''    http://www.highend3d.com/maya/mel/?section=utilities#1840
''  Please do not alter any information above this line
''  it is generated dynamically by Highend3d.com and will
''  be changed automatically on any updates.

BVH file import. Builds a skeleton under bvh_import group and imports the motion.
Sets up zero initial position at the frame -20. Works with Maya 4.0.
Just run the script and pick a bvh file.

NOTE: New Curve Default option (under Preferences/Settings/Keys) MUST be "Independent Euler-Angle Curves"

Written by Sergiy Migdalskiy <migdalskiy(at)hotmail.com>, comments and suggestions are welcome.

Originally based on the bvh_import.mel script by sung joo, Kang (Gangs) / sjkang(at)bigfilm.co.kr, gangs2(at)nownuri.net
*/

string $filename=`fileDialog -dm "*.bvh"`;

$fileId=`fopen $filename "r"`;

select -cl;
int $joint_name_val = 0;
string $joint_name[];
float $offset_joint_x[], $offset_joint_y[], $offset_joint_z[];
string $index_joint[];
int $index = 0;
int $index_ch = 0;
string $index_channel[];
string $make_joint_cmd;
string $ch_tmp;
float $frame_interval;
string $temp_buff[];
string $name, $name_temp;

clear $joint_name $offset_joint_x $offset_joint_y $offset_joint_z $index_joint $index_channel;

$lbAuto=`autoKeyframe -q -st`;		//Get auto keyframing status.
autoKeyframe -st false;			//Turn off automatic keyframing.

$name = `group -em -n bvh_import`;
tokenize $name "bvh_import" $temp_buff;
if (size($temp_buff) == 0) {
	$name_temp = "";
} else {
	$name_temp = $temp_buff[0];
}

string $nextWord = `fgetword $fileId`;

float $offsetx = 0;
float $offsety = 0;
float $offsetz = 0;

int $frames;
float $time_count = 0;

string $last_joint_name_val = "";

proc float Turn180Degrees (float $fAngle)
{
	float $gPi = 3.1415926535897932384626433832795;
	return $fAngle >= 0 ? $fAngle - $gPi : $fAngle + $gPi;
}

// mirrors the rotation: returns pi-fAngle
// normalized to the given normalized [-pi,pi) angle
proc float Mirror180Degrees (float $fAngle)
{
	float $gPi = 3.1415926535897932384626433832795;
	return $fAngle > 0 ? $gPi - $fAngle : -$gPi - $fAngle;
}

proc setRotation (string $strObject, float $fRotate[])
{
	float $fRotateX, $fRotateY, $fRotateZ;

	$fRotateX = $fRotate[0];
	$fRotateY = $fRotate[1];
	$fRotateZ = $fRotate[2];

	rotate -r -os 0 0 $fRotateZ $strObject;
	rotate -r -os $fRotateX 0 0 $strObject;
	rotate -r -os 0 $fRotateY 0 $strObject;
	setKeyframe -at "rotateX" $strObject;
	setKeyframe -at "rotateY" $strObject;
	setKeyframe -at "rotateZ" $strObject;
}

while (  size($nextWord) >0 )	{

		if ($nextWord == "ROOT")	{
				$jointname = `fgetword $fileId`;
				$joint_name[0] = $jointname+$name_temp;
				$index_joint[$index] = $jointname+$name_temp;
				joint -n $joint_name[0] -p 0 0 0;
		}

		if (($nextWord=="JOINT") || ($nextWord=="End"))    {
				// find Joint name
				$jointname = `fgetword $fileId`;
				$joint_name[$joint_name_val] = $jointname+$name_temp;
				$index_joint[$index] = $jointname+$name_temp;
		}

		if ($nextWord == "{")	{
			$nextWord = `fgetword $fileId`;
			if ($nextWord == "OFFSET")	{
				// find Joint offset data
				float $offset_x=`fgetword $fileId`;
				float $offset_y=`fgetword $fileId`;
				float $offset_z=`fgetword $fileId`;
				$offset_joint_x[$joint_name_val] = $offset_x;
				$offset_joint_y[$joint_name_val] = $offset_y;
				$offset_joint_z[$joint_name_val] = $offset_z;

				$offsetx = $offsetx + $offset_joint_x[$joint_name_val];
				$offsety = $offsety + $offset_joint_y[$joint_name_val];
				$offsetz = $offsetz + $offset_joint_z[$joint_name_val];

				if ($joint_name_val != 0)	{
					if ($joint_name[$joint_name_val] == "Site")
						$joint_name[$joint_name_val] = "Effector" + $joint_name[$joint_name_val-1];
					$last_joint_name_val = $joint_name_val;
					$make_joint_cmd = "joint -n "+ $joint_name[$joint_name_val]+ " -p " + $offsetx + " " + $offsety + " " + $offsetz;
					$sel_joint_cmd = "select -r " + $joint_name[$joint_name_val-1];
					$ord_joint_cmd = "setAttr " + $joint_name[$joint_name_val-1] + ".rotateOrder 2";
					// Note: these commands must be evaluated to actually build
					// the skeleton; the eval calls appear to have been lost in
					// transcription and are restored here.
					eval $sel_joint_cmd;
					eval $make_joint_cmd;
					eval $ord_joint_cmd;
				}

				$joint_name_val ++;
			}
		}

		if ($nextWord == "}")	{
				$joint_name_val --;
				$offsetx = $offsetx - $offset_joint_x[$joint_name_val];
				$offsety = $offsety - $offset_joint_y[$joint_name_val];
				$offsetz = $offsetz - $offset_joint_z[$joint_name_val];
		}

		if ($nextWord == "CHANNELS") {
				int $tmp = `fgetword $fileId`;
				for ($i = 1; $i <= $tmp; $i++)	{
						string $tmp2 = `fgetword $fileId`;
						switch ($tmp2)	{
								case "Xposition" :
									$ch_tmp = "translateX";
									break;
								case "Yposition" :
									$ch_tmp = "translateY";
									break;
								case "Zposition" :
									$ch_tmp = "translateZ";
									break;
								case "Xrotation" :
									$ch_tmp = "rotateX";
									break;
								case "Yrotation" :
									$ch_tmp = "rotateY";
									break;
								case "Zrotation" :
									$ch_tmp = "rotateZ";
									break;
						}
						$index_channel[$index_ch] = $index_joint[$index] + "." + $ch_tmp;
						$index_ch ++;
				}
				$index ++;
		}


	    if ($nextWord == "MOTION") {

			$nextWord = `fgetword $fileId`;

			if ($nextWord == "Frames:") {
				$frames = `fgetword $fileId`;
			}

			$nextWord = `fgetword $fileId`;
			$nextWord = `fgetword $fileId`;

			if ($nextWord == "Time:") {
				$frame_interval = `fgetword $fileId`;
			}

			$nextWord = `fgetword $fileId`;
			float $fRotation[3];
			int $axies = 0;
			for ($k = 1; $k <= $frames; $k++) {

				currentTime $k;
				print ("currentTime " + $k + "\n");
				for ($chan = 0; $chan < size($index_channel); $chan++)
					setAttr $index_channel[$chan] 0;

				for ($j = 1; $j <= $index_ch; $j++) {
					float $value = $nextWord;
					string $buffer[];
					tokenize $index_channel[$j-1] "." $buffer;
					switch ($buffer[1]) {
						case "translateX":
						case "translateY":
						case "translateZ":
							setAttr $index_channel[$j-1] $value;
							setKeyframe -at $buffer[1] $buffer[0];
							break;
						case "rotateX":
							$fRotation[0] = $value;
							$axies = $axies + 1;
							break;
						case "rotateY":
							$fRotation[1] = $value;
							$axies = $axies + 2;
							break;
						case "rotateZ":
							$fRotation[2] = $value;
							$axies = $axies + 4;
							break;
					}
					if ($axies == 7) {
						setRotation ($buffer[0], $fRotation);
						$axies = 0;
					}
					if ($axies > 7) {
						print ("Some kind of axial problem.\n");
					}
					if ($k >= 40 && $k <= 50)
						print ("\tsetAttr " + $index_channel[$j-1] + "  " + $value + ";\n");
					if ($j < $index_ch) {
						$nextWord = `fgetword $fileId`;
					}
				}

				$time_count += ($frame_interval*30);
				$nextWord = `fgetword $fileId`;
			}

			currentTime -20;
			// make up the initial pose
			for ($chan = 3; $chan < size($index_channel); $chan++) {
				string $buffer[];
				tokenize $index_channel[$chan] "." $buffer;
				setAttr $index_channel[$chan] 0;
				setKeyframe -at $buffer[1] $buffer[0];
			}
		}

		$nextWord = `fgetword $fileId`;
}

select -cl;
fclose $fileId;
autoKeyframe -st $lbAuto;		//Restore auto keyframing mode.

* Export SL Sculptie

//  * Copyright (c) 2007-$CurrentYear$, Linden Research, Inc.
//  * $License$

global proc string llFirst(string $list[])
{
	return $list[0];
}

global proc llSculptExport(string $object, string $file_name, string $file_format,
						   int $resolution_x, int $resolution_y,
						   int $maximize_scale, int $fix_orientation, int $flip_horizontal)
{
	// copy it, because we're going to mutilate it.  MUHAHAHAAAaa...
	string $object_copy = llFirst(duplicate($object));

	// disentangle from groups
	string $parents[] = listRelatives("-parent", $object_copy);
	if (size($parents) != 0)
		$object_copy = llFirst(parent("-world", $object_copy));

	// scale it to unit cube
	float $bounding_min[3];
	float $bounding_max[3];

	$bounding_min = getAttr($object_copy + ".boundingBoxMin");
	$bounding_max = getAttr($object_copy + ".boundingBoxMax");

	float $scale[3];
	int $i;
	for ($i = 0; $i < 3; $i++)
		$scale[$i] = $bounding_max[$i] - $bounding_min[$i];

	float $scale_max = 0;
	for ($i = 0; $i < 3; $i++) if ($scale[$i] > $scale_max)
			$scale_max = $scale[$i];

	if ($maximize_scale)
	{
		print($object + " scale normalized - scale by " +
			  $scale[0] + " " + $scale[1] + " " + $scale[2] + " inside SL to get original shape\n");
		for ($i = 0; $i < 3; $i++)
			$scale[$i] = $scale_max;
	}

	scale("-relative", 1/$scale[0], 1/$scale[1], 1/$scale[2], $object_copy);

	// position it in unit cube
	$bounding_min = getAttr($object_copy + ".boundingBoxMin");
	$bounding_max = getAttr($object_copy + ".boundingBoxMax");
	float $center[3];
	for ($i = 0; $i < 3; $i++)
		$center[$i] = ($bounding_min[$i] + $bounding_max[$i]) / 2.0;

	move("-relative", 0.5 - $center[0], 0.5 - $center[1], 0.5 - $center[2], $object_copy);

	// nurbs surfaces can be adjusted to ensure correct orientation

	if ($fix_orientation)
	{
		string $shape = llFirst(listRelatives("-shapes", $object_copy));
		if ((nodeType($object_copy) == "nurbsSurface") ||
			(($shape != "") && (nodeType($shape) == "nurbsSurface")))
		{
			// try to determine the "north pole"
			float $pole[] = pointOnSurface("-turnOnPercentage", 1,
										   "-parameterU", 0.5,
										   "-parameterV", 0,
										   $object_copy);
			float $total_distance = 0;

			float $v;
			for ($v = 0; $v <= 1; $v += 0.1)
			{
				float $point[] = pointOnSurface("-turnOnPercentage", 1,
												"-parameterU", $v,
												"-parameterV", 0,
												$object_copy);
				float $distance = 0;
				int $i;
				for ($i = 0; $i < 3; $i++)
					$distance += pow($pole[$i] - $point[$i], 2);
				$distance = sqrt($distance);
				$total_distance += $distance;
			}
			if ($total_distance > 0.1)  // the points don't converge on the pole - swap
			{
				print("swapping UVs to orient poles for " + $object + "\n");
				reverseSurface("-direction", 3, $object_copy);
			}

			// now try to ensure the normal points "out"
			// note: this could easily fail - but there's no better way (i think.)

			float $total_orientation = 0;
			float $u;
			for ($u = 0; $u <= 1; $u += 0.1)
				for ($v = 0; $v <= 1; $v += 0.1)
				{
					float $point[] = pointOnSurface("-turnOnPercentage", 1,
													"-parameterU", $u,
													"-parameterV", $v,
													$object_copy);
					float $normal[] = pointOnSurface("-normal",
													 "-turnOnPercentage", 1,
													 "-parameterU", $u,
													 "-parameterV", $v,
													 $object_copy);

					// check the orientation of the normal w/r/t the direction from center
					float $center_dir[];
					for ($i = 0; $i < 3; $i++)
						$center_dir[$i] = $point[$i] - 0.5;

					float $orientation = 0;  // dot product
					for ($i = 0; $i < 3; $i++)
						$orientation += $center_dir[$i] * $normal[$i];
					$total_orientation += $orientation;
				}
			if ($total_orientation > 0)  // need to invert
			{
				print("reversing V for " + $object + "\n");
				reverseSurface("-direction", 1, $object_copy);
			}
			if ($flip_horizontal)
			{
				// reverse U, for compatibility with surface textures
				print("reversing U for " + $object + "\n");
				reverseSurface("-direction", 0, $object_copy);
			}
		}
		else
			warning("cannot fix orientation on non-nurbs object: " + $object);
	}

	// create temporary shading network
	string $sampler_info = createNode("samplerInfo");

	print("exporting sculpt map for " + $object + " into file " + $file_name + "\n");
	// bake sculpt texture
	string $fileNodes[] = convertSolidTx("-fileImageName", $file_name,
										 "-fileFormat", $file_format,
										 "-force", 1,
										 "-resolutionX", $resolution_x,
										 "-resolutionY", $resolution_y,
										 $sampler_info + ".pointWorld",
										 $object_copy);

	delete($fileNodes);  // we don't want 'em.  why do you make 'em?
}


global proc int llSculptEditorCallback()
{
	string $objects[] = ls("-sl");

	if (size($objects) == 0)
	{
		warning("please select objects to export");
		return 0;
	}

	string $filename      = textFieldButtonGrp("-query", "-fileName", "llSculptEditorFilename");
	int $resolution_x     = intSliderGrp("-query", "-value", "llSculptEditorResolutionX");
	int $resolution_y     = intSliderGrp("-query", "-value", "llSculptEditorResolutionY");
	int $fix_orientation  = checkBoxGrp("-query", "-value1", "llSculptEditorFixOrientation");
	int $flip_horizontal  = checkBoxGrp("-query", "-value1", "llSculptEditorFlipHorizontal");
	int $maximize_scale   = checkBoxGrp("-query", "-value1", "llSculptEditorMaximizeScale");

	// get filetype
	string $file_type;
	string $file_base;
	string $file_extension;
	string $tokens[];
	tokenize($filename, ".", $tokens);

	if (size($tokens) == 1)  // no extension, default to bmp
	{
		$file_base = $filename;
		$file_type = "bmp";
		$file_extension = "bmp";
	}
	else
	{
		$file_extension = $tokens[size($tokens) - 1];
		int $i;
		for ($i = 0; $i < size($tokens) - 1; $i++)
		{
			$file_base += $tokens[$i];
			if ($i != size($tokens) - 2)
				$file_base += ".";
		}
		if ($file_extension == "bmp")
			$file_type = "bmp";
		else if (($file_extension == "jpg") || ($file_extension == "jpeg"))
			$file_type = "jpg";
		else if (($file_extension == "tif") || ($file_extension == "tiff"))
			$file_type = "tif";
		else if ($file_extension == "tga")
			$file_type = "tga";
		else
		{
			warning("unknown image type (" + $file_extension + "). switching to bmp");
			$file_type = "bmp";
			$file_extension = "bmp";
		}
	}

	string $object;
	for ($object in $objects)
	{
		string $this_filename = $file_base;
		if (size($objects) > 1)
			$this_filename += "-" + $object;
		$this_filename += "." + $file_extension;

		llSculptExport($object, $this_filename, $file_type, $resolution_x, $resolution_y,
					   $maximize_scale, $fix_orientation, $flip_horizontal);
	}

	return 1;
}

global proc llSculptEditorSetFilenameCallback(string $filename, string $filetype)
{
	textFieldButtonGrp("-edit", "-fileName", $filename, "llSculptEditorFilename");
}

global proc llSculptEditorBrowseCallback()
{
	fileBrowser("llSculptEditorSetFilenameCallback", "Export", "image", 1);
}

global proc llSculptEditor()
{
	string $commandName = "llSculptExport";

	string $layout = getOptionBox();
	setParent $layout;

	setUITemplate -pushTemplate DefaultTemplate;

	tabLayout -tabsVisible 0 -scrollable 0;

	string $parent = `columnLayout -adjustableColumn 1`;

	separator -height 10 -style "none";

	// Control names below are reconstructed from the -query calls
	// in llSculptEditorCallback.
	textFieldButtonGrp
		-label "Filename"
		-fileName "sculpt.bmp"
		-buttonLabel "Browse"
		-buttonCommand "llSculptEditorBrowseCallback"
		llSculptEditorFilename;

	intSliderGrp
		-field on
		-label "X Resolution"
		-minValue 1
		-maxValue 512
		-fieldMinValue 1
		-fieldMaxValue 4096
		-value 64
		llSculptEditorResolutionX;

	intSliderGrp
		-field on
		-label "Y Resolution"
		-minValue 1
		-maxValue 512
		-fieldMinValue 1
		-fieldMaxValue 4096
		-value 64
		llSculptEditorResolutionY;

	checkBoxGrp
		-label ""
		-label1 "Maximize scale"
		-numberOfCheckBoxes 1
		-value1 off
		llSculptEditorMaximizeScale;

	checkBoxGrp
		-label ""
		-label1 "Flip Horizontal"
		-numberOfCheckBoxes 1
		-value1 on
		llSculptEditorFlipHorizontal;

	checkBoxGrp
		-label ""
		-label1 "Correct orientation"
		-numberOfCheckBoxes 1
		-value1 on
		llSculptEditorFixOrientation;

	setUITemplate -popTemplate;

	string $applyBtn = getOptionBoxApplyBtn();
	button -edit
		-label "Export"
		-command "llSculptEditorCallback"
		$applyBtn;

	string $applyAndCloseBtn = getOptionBoxApplyAndCloseBtn();
	button -edit
		-label "Export and Close"
		-command "llSculptEditorCallback"
		$applyAndCloseBtn;

	setOptionBoxTitle("Export Sculpt Texture");

	setOptionBoxHelpTag( "ConvertFileText" );
}




* Select all joints in hierarchy

select -hi root;

Tutorial: PBR Texture Conversion

By Joe “EarthQuake” Wilson

In this tutorial, I’m going to demonstrate how content created for traditional shaders can be converted to PBR shaders, how to convert content from one PBR workflow to another, and explain the various differences in modern workflows. This tutorial is intended for intermediate to advanced users, so be sure to read the previous two PBR tutorials that Jeff Russell and I wrote as the base concepts are explained in great detail and may only be briefly mentioned here.

Table of Contents

  • PBR: Misconceptions and Myths
  • PBR: What Has Changed?
  • Traditional Content Recap
  • Conversion: Traditional -> PBR Specular
  • Metalness Workflow vs Specular Workflow
  • Conversion: Specular -> Metalness
  • Conversion: Metalness -> Specular
  • Comparisons and Disclaimers
  • Material Logic

PBR: Misconceptions and Myths

Before we get started, I want to clear a few things up. There is a lot of confusion in terms of what physically based rendering actually is, and what sort of texture inputs are required in a PBR system.

First off, using a metalness map is not a requirement of PBR systems, and using a specular map does not mean an asset is “not PBR”. I see comments about this regularly on forums: when someone sees an artist creating a specular and gloss map, they often ask, “Why aren’t you using PBR?” So let’s break down what PBR actually is.

PBR in the most basic sense is a combination of sophisticated shaders that represent the physics of light and matter, along with art content that is calibrated using plausible values to represent real world materials. PBR is essentially a holistic system of content creation and rendering, which can and often does have variances (generally shader models or texture input types) in actual implementation, depending on what tools or engine you use.

Additionally, loading any old content into a PBR shader does not guarantee physically accurate results. I see this misconception rearing its head just as often as the “Why not PBR?” one mentioned above. Fancy shaders are only half of the equation; you also need logically calibrated art content.

One last thing: is it diffuse or albedo? These two terms mean essentially the same thing (the base color of an object) and are often used interchangeably.

PBR: What Has Changed?

To fully understand how to create or convert content intended for PBR systems, it’s important to look at how shaders have changed. One of the biggest differences is how advanced the lighting calculations are in modern shaders. Today we have dynamic light sources that cast realistic shadows, and image-based lighting that provides accurate ambient diffuse and specular reflections. This means that we no longer have to paint lighting, reflection or shadow content directly into our textures. Now more than ever, we can focus on replicating material properties rather than baking in specific lighting conditions.

Additionally, linear space rendering means we no longer have to color specular maps the opposite color of our diffuse to get a neutral white highlight, while energy conservation in the microsurface function (rougher surfaces will have broader highlights with a dimmer appearance as the light is dissipated over a larger area) removes the need to manually make rough areas dark and glossy areas bright in the spec map. This means specular maps will generally contain little more than flat values (greyscale for insulators, colored for some metals) for each material type, while microsurface maps should define most of the surface variation.
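To see why energy conservation makes rough highlights dimmer, consider the normalization factor of a Blinn-Phong specular lobe, (n + 2) / (2π). This is my own example using one common shading model, not something the tutorial specifies:

```python
import math

def blinn_phong_peak(n):
    """Peak highlight intensity of a normalized Blinn-Phong lobe with
    specular exponent n: (n + 2) / (2 * pi). Higher n = glossier."""
    return (n + 2) / (2 * math.pi)

# A rough surface (n = 8) spreads the same energy over a wide lobe,
# so its peak is far dimmer (~1.6) than a glossy surface's (n = 512, ~82):
print(blinn_phong_peak(8))
print(blinn_phong_peak(512))
```

The shader applies this automatically, which is exactly why the artist no longer needs to darken rough areas of the spec map by hand.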

Traditional Content Recap

To show the difference between traditional and modern shaders, I’m using a gun that I created for Darkest of Days. This asset is a great example because it showcases a lot of techniques that I typically wouldn’t use in a PBR pipeline, for instance:

  1. The diffuse is too dark; this was likely tweaked to look good in a specific lighting environment, a big no-no.
  2. Ambient occlusion and cavity detail is baked directly into both the diffuse and specular map. AO/cavity content should be added via a separate input so the shader can use them in a more intelligent way. Additionally, large scale AO should not be added directly to the specular pass, as occluded light isn’t the same thing as a less reflective surface – that’s what the specular map defines.
  3. A gradient map is baked into the diffuse and specular map as well. Gradient maps can be handy tools to create masks for localized effects (eg, dirt on the lower areas of a character); however, they shouldn’t be multiplied directly on your texture.
  4. The shader this asset originally used didn’t support gloss maps. This means the specular map had to do double duty trying to represent both reflectivity and microsurface, while using a uniform glossiness value for the entire material.
  5. The specular values were set up by eye, rather than accounting for real world material properties. As a result, the black painted metal is too reflective, and has a slight yellow tint for no apparent reason, while the plastic and rubber materials are not reflective enough.

Texture Conversion: Traditional -> PBR Specular

Now that we understand the common differences between traditional and PBR shaders, we can update the content to work in a physically based specular workflow.

First off, I removed all of the baked lighting and gradient content from the albedo and specular maps. The AO and cavity content was moved to two separate textures and hooked up in the appropriate slot in the shader. After that, I brightened the diffuse map to a more reasonable value.

From there, I split the specular content into a gloss map and a specular map. I moved all of the surface variation from old specular map into the newly created gloss map and updated the base values to represent the microsurface structure of each material. If you already have a gloss map, you should double check your values and make sure they make sense. For instance, the underlying raw metal on a rifle would generally be smoother/glossier than the matte finish of the coating, while scratches in the finish of a glossy paint may reveal the primer below which would be rougher.

With most of the texture variation moved from the spec map to the gloss, we can focus on the reflectivity values. At this point it’s very important to identify what is and is not metal (yes, even if you’re not using the metalness workflow). The reason for this is simple: insulators tend to have uncolored reflectance values around 4% linear (or #383838 sRGB), with min and max values generally in the 2-16% range (though few insulators other than gemstones are > 4%), while pure metals have much higher reflectance values, generally in the 70-100% range. Thus, figuring out exactly what type of material you’re representing is very important when it comes to finding the right reflectance value.

PROTIP: A metal object that is painted or coated by a different substance is considered an insulator when it comes to picking values, only where that surface has been worn off would it be metallic.
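The 4%-linear-to-#383838-sRGB correspondence quoted above is easy to verify with the standard sRGB transfer function. A quick Python check of my own:

```python
def srgb_encode(linear):
    """Standard sRGB transfer function (linear 0..1 -> display 0..1)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def linear_to_hex(linear):
    """Encode a linear grayscale value as an sRGB hex color."""
    v = round(srgb_encode(linear) * 255)
    return "#{0:02x}{0:02x}{0:02x}".format(v)

print(linear_to_hex(0.04))  # -> #383838, the ~4% insulator reflectance
```

This is also why you should never eyedrop reflectance values from a gamma-encoded reference image and treat them as linear percentages.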

Metalness Workflow vs Specular Workflow

Before we go any further, it’s important to understand the primary differences between the metalness and specular workflows. Most game engines support one or the other; however, Toolbag 2 supports both, which allows us to compare their merits directly.

The biggest difference between the two workflows is how the diffuse and reflectivity content is defined in the texture maps. In the specular workflow, these values are set explicitly with two unique texture maps.

The metalness workflow on the other hand, uses the albedo map to define both the diffuse and reflectivity content and the metalness map to define whether the material is an insulator or a metal. The reason for this is that metals conduct electricity, which means that most photons (which are electromagnetic waves) reflect off the surface, and any photons that pass through the surface are absorbed rather than diffused, so metals typically do not have a diffuse component. Insulators on the other hand reflect a very small amount of light (~4%) and much of the light that hits the material diffuses or bounces around the surface creating an even distribution of color.

In practice, this means that much or even all (if your texture has only metals or insulators but not both) of either the diffuse or specular map will be wasted information, so the metalness workflow is usually more efficient. However, one of the drawbacks to storing both diffuse and specular content in the same texture is artifacts along material transitions.

Gloss and roughness maps define the same information, but usually on an inverse scale. With gloss maps, bright values typically define smooth/glossy surfaces, while roughness maps typically use bright values to define rough/matte surfaces. In some regions, the word glossiness is synonymous with reflectivity, so some people think roughness is a less confusing word to use. The important thing here is not what the map is called but what the values represent; if in doubt, talk to your technical artists or engineers.

Pros of the specular workflow

  1. Diffusion and reflectance are set directly with two explicit inputs, which may be preferable to artists who have experience working with traditional shaders.
  2. More control over reflectivity for insulators is provided with a full color input.

Cons of the specular workflow

  1. Easy to use illogical reflectance values which gives inaccurate results.
  2. Uses more texture memory than the metalness workflow.

Pros of the metalness workflow

  1. The albedo map defines the color of the object no matter the type of material, which may be easier for artists to understand conceptually.
  2. Simplifies materials into two categories, insulators and metals, which may make it more difficult to author content with unrealistic texture values.
  3. Uses less texture memory than the full color specular workflow.

Cons of the metalness workflow

  1. Material transition points cause white line artifacts.
  2. Less control over reflectivity for insulators.*
  3. If artists do not understand the workflow, it’s easy to use illogical values in the metalness map and break the system.

*Some metalness workflows provide both a metalness map and a secondary specular map to control insulator reflectivity. This is not currently possible to set up in Toolbag 2.

Some people claim that the metalness workflow is easier to understand; but personally, I think it’s about even. Each workflow can be broken when artists use illogical content, and depending on your experience, one may be easier than the other to pick up. Essentially, neither method is objectively better than the other, they are simply different.

Texture Conversion: Specular -> Metalness

Now that we have properly calibrated content, and know the difference between the two workflows, it is actually very simple to convert the maps.

First off, create a metalness mask by assigning all of your materials either a black (non-metal) or white (metal) value depending on what the surface is. If you have the PSD for your texture, this can be done quickly by using the mask content from various layers to build the metalness information. Your metalness map should be mostly white and black, with gray values only for effects that have soft transitions like dirt, dust, rust, etc. Gray values can also be used for materials that are partially metallic; however, these are generally quite rare. Usually, when a metal object has any sort of coating, it acts as an insulator.

Once you have your metalness map, create a new file in Photoshop and add your diffuse map as the background layer. Then, add the specular layer on top, and add a layer mask. Paste your metalness map into the specular layer’s layer mask. What you should see is the specular content where the metallic surfaces are and the diffuse content where the insulators are, which means you have a proper albedo map for the metalness workflow.
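The layer-mask composite above is just a per-pixel linear interpolation between the diffuse and specular maps, driven by the metalness value. A hedged numpy sketch of the same operation (maps as float arrays in [0, 1]; loading and saving the image files is left out):

```python
import numpy as np

def specular_to_metalness_albedo(diffuse, specular, metalness):
    """Composite an albedo map for the metalness workflow.

    diffuse, specular: (H, W, 3) float arrays in [0, 1]
    metalness: (H, W) float array, 0 = insulator, 1 = metal

    Where the mask is white we keep the specular color, where it is
    black we keep the diffuse color -- the same result as pasting the
    metalness map into the specular layer's mask in Photoshop.
    """
    m = metalness[..., np.newaxis]  # broadcast the mask over RGB
    return diffuse * (1.0 - m) + specular * m
```

Gray transition values in the metalness map blend the two sources, which is why soft effects like dirt and rust convert cleanly.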

Texture Conversion: Metalness -> Specular

Converting to the specular workflow from the metalness workflow is easy as well. We simply need to split the diffuse and specular information from the albedo map into explicit diffuse and specular textures.

Diffuse Map

  1. Load your albedo map into Photoshop
  2. Create a new fill layer that is black (#000000)
  3. Paste your metalness map into the layer mask of your fill layer

Specular Map

  1. Make a duplicate of the original albedo map and move it above the fill layer
  2. Create another fill layer with a value of #383838
  3. Paste your metalness map into the layer mask of your fill layer
  4. Invert the layer mask
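The two sets of layer steps above likewise reduce to two per-pixel blends. A minimal numpy sketch, assuming all maps share the same color space (the 0x38/255 constant is the #383838 fill value from step 2 expressed as a 0–1 value):

```python
import numpy as np

INSULATOR_SPEC = 0x38 / 255.0  # the #383838 fill color from the steps above

def metalness_to_specular(albedo, metalness):
    """Split a metalness-workflow albedo into diffuse and specular maps.

    albedo: (H, W, 3) float array in [0, 1]
    metalness: (H, W) float array, 0 = insulator, 1 = metal
    """
    m = metalness[..., np.newaxis]  # broadcast the mask over RGB
    diffuse = albedo * (1.0 - m)    # black fill where metal
    # Albedo color where metal, flat insulator reflectance elsewhere:
    specular = albedo * m + INSULATOR_SPEC * (1.0 - m)
    return diffuse, specular
```

Note that inverting the layer mask in step 4 is what the `1.0 - m` term represents: the flat #383838 fill shows through over the insulators, and the original albedo shows through over the metals.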

Comparisons and Disclaimers

Now we can compare the effectiveness of the two different conversion workflows explained above.

One important thing to note: because the base content was first calibrated to reasonable values, the conversion process worked very well. If your base content was not calibrated, you will see much bigger differences in the conversion. Similarly, if you’re representing certain materials, like insulators that have colored reflections (which are generally quite rare, and usually best represented with custom shaders for the typical ones like hair or iridescent materials), you will lose some information in the conversion process.
Ideally, you should author content directly for the target rendering system, and only rely on conversion when you’re switching systems and updating old content, or when you need to create content for multiple systems.

Material Logic

So what do I mean by reasonable or calibrated values? Unfortunately, this is a difficult question to answer; it varies heavily depending on exactly which type of material you’re trying to represent. Instead of creating a cheat sheet, I will try to explain something I like to call material logic.

Sticking to charts and scan data can be a bit of a crutch. Firstly, you will not be able to find data for every material you wish to recreate, so you need to use some logic to decide for yourself what values are appropriate. Secondly, the surface qualities of the same material type can differ significantly depending on a variety of conditions like age, wear, finish, etc.

The first step is to figure out what sort of material an object is made of. For many objects this is straightforward: you can tell by simply looking at a photo that it is made of metal, plastic, rubber, etc. However, other objects may be more complex or may contain parts that are made from a variety of materials. Here, research is your friend: find reference images or similar real-world objects to study, and even look into how specific objects are manufactured.

Once you know what it’s made out of, you can start to come to various conclusions. For instance, metals are much more reflective than insulators, rubber is generally rougher than plastic, concrete typically has a brighter albedo than asphalt, and so on. Most of this can be figured out through simple observation.

At this point, you can think more specifically about the qualities you want your texture to have. For instance, paints tend to come in a variety of different finishes from matte to glossy, plastics as well, and metals will vary quite a bit depending on how polished the surface is.

In conclusion, working with a physically based rendering system requires a lot of the same artistic skills that creating game art has always relied upon, like the ability to observe and recreate materials from the world around you. It’s important to understand the underlying concepts of a PBR system; however, at the end of the day you need to trust your artistic instincts.