Resolution: what's reasonable?

Joshua Langman's picture

I know that the answer depends on what you're printing and what you're printing it with, but as a designer I'd like to be more confident about what's a reasonable resolution for producing a file for a certain context. This mostly applies to images, because type, of course, remains as curves in PDFs, but I'm curious about type as well: even if the resolution is determined only when it gets to the press, what will that resolution be? So I'm asking just for some examples of the resolution of different printed materials: things made on digital presses vs. offset presses; newspapers vs. magazines. How about those coffee table art books? I haven't found a handy reference for this sort of thing.

When I am scanning or exporting imagery I usually do it at the highest DPI possible, even if I know that the printer isn't capable of maintaining that DPI. Is there any detriment to this? (Other than enormous files, which some people mind but have never bothered me.)

And is it just me, or can you get away with much lower resolution on images than on type? The same printer that produces gorgeous, detailed photographs can't seem to make type look smooth enough.

All thoughts are welcome!

hrant's picture

{To Follow}

kentlew's picture

For images destined for print, the thing that drives the equation is the halftone linescreen (and to a lesser extent, the relationship between linescreen and the output resolution of the rendering device).

Generally the rule of thumb is that you need at least 1:1 image resolution to linescreen, and more than 2:1 image resolution to linescreen is a waste of file space.

Since a majority of commercial printing is done at 150 linescreen, the rule of thumb for images is 300 ppi at actual size.

If you have more than 2 pixels for every halftone dot, then you’re going to end up having pixels that “fall through the cracks,” so to speak, and never end up influencing a printed dot at all.
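The 1:1-to-2:1 rule above reduces to a one-line calculation. A minimal sketch in Python (the function name and the 2× default are mine, not from the thread):

```python
def recommended_ppi(linescreen_lpi, factor=2.0):
    """Rule of thumb: image resolution should be 1x to 2x the halftone
    linescreen; anything beyond 2x is wasted file space."""
    return linescreen_lpi * factor

# 150 lpi (typical commercial printing) -> 300 ppi at actual size
print(recommended_ppi(150))   # 300.0
# 175 lpi (coffee-table art books) -> 350 ppi
print(recommended_ppi(175))   # 350.0
```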

One could argue, I suppose, that with 4c you could use greater resolution because the different screen angles have dots falling at different interstices. But I think you’d be hard pressed to find any difference.

An advantage of optimizing image resolution for targeted linescreen (as opposed to just providing maximum resolution, if it’s greater than, say 2.5x linescreen) is that this allows you to more carefully target any unsharp masking values in order to increase perceived sharpness and clarity. At too high an image resolution in relation to linescreen, any unsharp masking will get lost “between the cracks” and the image will not be as sharp.

Coffee table art books are usually printed at 175 linescreen; sometimes higher. It depends on the stock and the press and how fine a dot it’s possible to hold reliably. The finer the linescreen, the smoother and more detailed the image.

Some high-end printing uses stochastic screening (aka frequency modulation), which is a whole different beast. Stochastic allows for much finer placement of dots. But again, the dot size is going to be determined by press & paper conditions. I don’t recall offhand the math for resolution for stochastic. Depending upon your dot, though, you can figure that higher image resolutions may be of value.

kentlew's picture

Oh, forgot to address the relationship of halftone linescreen to output device resolution.

The relationship between the two will determine the number of possible gradations between white (no dot) and solid. If the output device resolution is insufficient to allow for an adequate range of dots between 0 and 100%, then there will be perceptible banding.

For instance, if you have an output resolution of 600 dpi (a desktop laserprinter, for instance), and you print an image at 150 lpi, then each dot of your halftone will be composed of a maximum of 16 toner dots: 600÷150 = 4 dots in each dimension → 4×4 = 16 toner dots per halftone dot.

Therefore, you can have only 16 different values in gradation from none to black (well, 17 if you count white). With only 16 levels of grey + white, that’s a 6.25% jump between each level of grey, and you’re going to have a posterized reproduction.
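The dpi-to-lpi arithmetic above generalizes to any device/screen combination. A small sketch (Python, my own naming):

```python
def grey_levels(device_dpi, linescreen_lpi):
    """Each halftone cell is (dpi / lpi) device dots on a side.  The number
    of dots switched on ranges from 0 to n*n, giving n*n + 1 possible
    values when you count paper white."""
    n = device_dpi // linescreen_lpi
    return n * n + 1

# 600 dpi laser printer at 150 lpi: 4x4 cell -> 17 levels, visible banding
print(grey_levels(600, 150))    # 17
# 2400 dpi imagesetter at 150 lpi: 16x16 cell -> 257 levels, smooth tone
print(grey_levels(2400, 150))   # 257
```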

This is a somewhat simplified and abstracted example, since it assumes essentially non-round dots. To produce rounded dots in an adequate range of values, the math is a little different, depending upon the algorithm for shaping the dots.

But you get the idea.

So, in addition to being constrained by press/paper conditions, optimum halftone linescreen for a job may be influenced by the resolution of the output device that is producing the film or plate. Which in turn determines your optimum image pixel resolution.
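Turning that relationship around gives an estimate of the finest linescreen a given output device can hold while still rendering smooth tone. A hypothetical helper (256 levels is a common target for continuous tone; the name and threshold are mine):

```python
import math

def max_linescreen(device_dpi, min_levels=256):
    """Finest halftone linescreen that still allows at least `min_levels`
    gradations on a device of the given resolution (the dpi/lpi
    relationship above, inverted)."""
    return device_dpi / math.sqrt(min_levels - 1)

# A 2400 dpi imagesetter supports roughly a 150 linescreen at full tone
print(round(max_linescreen(2400)))   # 150
```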

Theunis de Jong's picture

> One could argue, I suppose, that with 4c you could use greater resolution because the different screen angles have dots falling at different interstices. But I think you’d be hard pressed to find any difference.

The dots for all colors are spaced out exactly the same. It may appear they are not if you are examining a horizontal line; but the only difference is that each color is printed at a different angle to prevent moiré.

I feel it's the other way around: a full-color image may sit at the lower bound of "acceptable resolution" precisely because it is to be printed full-color. Any slightly visible pixelation on one color plate will surely be masked by the three other plates, each with a slightly different offset -- and thus slightly different pixelation.

If you consider pure black-and-white images -- with or without type -- go as high as you possibly can. Monochrome bitmaps (genuinely 1-bit black-and-white, not a true-color image that just happens to use only two colors) are passed on to the printer at their original resolution. A 300-dpi monochrome bitmap will print crisply, but with pixels still visible to the naked eye; if I need a bitmap image, I go for broke, at 800 dpi or more. That usually renders my bitmaps razor sharp.

(There is a physical upper limit: if you approach the resolution of the output device, say, a plate-maker, your tiniest details might get lost because this bitmap has to be aligned with the device's own pixels. But you are talking about something like 3000 dpi, for a typical industry standard plate-maker. Each pixel will measure less than 1/100th of a millimeter.)

kentlew's picture

> It may appear they do not if you are examining a horizontal line

Exactly. And the imagesetter basically renders along that line. So it could be argued that a pixel which would fall between dots on the black plate, would fall on the dot of the magenta plate (just as a hypothetical example).

I don’t buy the argument, myself, since the likelihood of changing the magenta dot significantly is minuscule.

> I feel it's the other way around: a full-color image may touch the lower bottom of "acceptable resolution" because it is to be printed full-color. Any slightly visible pixelation on one color plate will surely be masked by three other plates,

Yes, that’s the other side of the argument, and perhaps more plausible.

cerulean's picture

For black and white line art, 300ppi is a common standard, and it's okay, but 600ppi is noticeably better to the naked eye (at least to mine), especially if there is type or lettering in the image. Before computers, screening a greyscale image would fuzz out any solid black in it, but modern output can print halftone greys and preserve sharp black lines in the same image. The experience I speak from is in publishing comic strips. As you say, continuous-tone photographs do not require nearly as much fidelity, because the difference between one pixel and the next is indiscernible when they're close to the same color, plus they are entirely subject to the linescreen.

JamesM's picture

> The same printer that produces gorgeous, detailed
> photographs can't seem to make type look smooth enough

By any chance are you referring to type that overprints photos? In some situations type can be accidentally rasterized when it overprints a photo, resulting in lower type quality. If you're using InDesign, putting the type on a separate layer can help prevent that.

Joshua Langman's picture

Thanks for the insight!

James, I'm not talking specifically about this. I mean that 600 dpi prints a decent photo but not very nice type.

I'm also slightly more confused than I was when I started, but maybe that's a good thing. My confusion might have to do with lpi versus dpi, which I now, thanks to kentlew, have a much better understanding of. But still, the examples that have been given so far seem to be using numbers that sound to me extremely low. I have a friend who's a designer at a publisher of fine art books, who said that all digital images had to be something like a minimum of 20,000 dpi and an ideal of 50,000 — and yes, that's the right number of zeroes. Is this logical? Is there any printing process that preserves that resolution?

How about resolution as it applies to solid type, not halftones? (dpi, not lpi) What do you consider ideal? What's standard?

Joshua Langman's picture

Ah, another thought. Are halftone dots ever done such that the lpi equals the dpi? So one dot is one "pixel" of resolution? This would seem to be the most logical linescreen, but would it work?

EDIT: Wait, that's a stupid question, because the dots need to be different sizes to make different shades. Duh. Never mind.

Birdseeding's picture

> something like a minimum of 20,000 dpi and an ideal of 50,000

I would say something is very wrong here indeed, because, printing issues notwithstanding, the latter would require a 2500 MP (2.5 gigapixel) camera for a one-square-inch photograph, which is way beyond the limits of today's capabilities.

kentlew's picture

> something like a minimum of 20,000 dpi and an ideal of 50,000

I can’t even imagine an image reproduction process that would retain much of that extra information.

kentlew's picture

So, what about type? In this case, you have a completely different set of considerations.

The two main effects of output device resolution will be resolvable differences and smoothness. These are basically the same issues you encounter with type on the screen, except in print the resolutions are higher and there isn’t any anti-aliasing (gray levels to increase perceived smoothness).

Average-quality imagesetters generally run in the range of 1270 dpi. (High-quality imagesetters will generally run about double that, maybe more.)

Let’s take the example of an average text type output from an average 1270 dpi imagesetter. And let’s say that we’re rendering a font with 1000 upm (standard Postscript em).

The full 11-point height of your em will be made up of 194 imaging dots: 11pt ÷ 72 pts/inch = 0.1527 inches; × 1270 dots/inch = 194 dots.

Since the full 11-point height of your font is defined as 1000 em units, that means that the diameter of a single toner dot is equivalent to about 5.15 em units.

What that means is that the resolvable difference is about 5 em units. Depending upon where your outlines fall, you probably won’t be able to render a difference between a stem of 84 units and one of 87 units, for example, or a serif height of 30 vs 34, perhaps.

(This is oversimplifying a bit, because there are several tricks built into the type rendering algorithms of Postscript imagesetters.)

There will also be nuances of curves that are similarly indistinguishable because the differences fall beneath the threshold of device resolvability.

On that same 1270 dpi imagesetter, however, anything above about 57 pts will be rendered with full resolvability, since the 1000 upm will now be rendered with at least 1000 imaging dots.
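kentlew's em-unit arithmetic condenses into a few lines. A Python sketch (function and variable names are mine):

```python
def em_units_per_dot(point_size, device_dpi, upm=1000):
    """Size of one imaging dot, expressed in font em units, for type of a
    given point size rendered on a device of a given resolution."""
    dots_per_em = point_size / 72 * device_dpi   # 72 points per inch
    return upm / dots_per_em

# kentlew's example: 11 pt type on a 1270 dpi imagesetter
print(round(em_units_per_dot(11, 1270), 2))   # 5.15 units per dot

# above ~57 pt, each em unit gets its own dot (full resolvability)
print(round(em_units_per_dot(57, 1270), 2))   # 0.99
```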

kentlew's picture

As I said, the other characteristic will be smoothness.

A vertical line is no problem. Perfectly smooth, regardless of resolution. But as you deviate from vertical or horizontal, you will get stair-stepping; how much and to what degree will depend upon how fine the imaging dot is (which is to say, how high the resolution is).

The higher the resolution, the finer the stair-stepping and the potentially smoother an angled line or curve.

At angles with smaller deviation from pure vertical or horizontal, this will be more apparent. This is just a basic fact of approximating an angle or curve on any grid.

Let’s say you have an ‘o’ in our hypothetical 11-point text face rendered by our hypothetical 1270 dpi imagesetter. And let’s say that the x-height is around 500 units (on the larger side for a text face, imo, but easier math ;-)

This ‘o’ will be rendered with about 97 imaging dots, in the vertical dimension. (Well, okay, maybe 100 dots with overshoot.) Will that be smooth enough?

Blown up like this and at full monitor pixels, maybe not. But at physical size on paper? Probably.

What about the same at 600 dpi output?

For just plain reading, and for the average person: probably. For a typophile examining closely: probably not.

But remember, also: stair-stepping and jaggedness in the rendering can be ameliorated by aspects of the physical reproduction process — e.g., ink spread in offset printing, or droplet scatter in inkjet printing.

This ‘o’ is a broad curve, with plenty of surface for rendering. What about a serif bracket? Or what about a stem with subtle swelling?

What’s enough? One of those reading-science types will have to weigh in with how fine a difference the average human eye can resolve — i.e., how large those jaggies have to be at what distance, etc.

gargoyle's picture

> I have a friend who's a designer at a publisher of fine art books and said that all digital images had to be something like a minimum of 20,000 dpi and an ideal of 50,000

If your friend was speaking in terms of dots per square inch, those numbers come much closer to reality.

Theunis de Jong's picture

Gargoyle: that works out to a perfectly normal range of 140 ~ 225 dpi. Oof! First I thought I was mis-reading those numbers, then I thought I was lagging behind with my technical expertise. About half a century or so.

What a weird way to express "resolution".

quadibloc's picture

I know that from my experience, 200 dpi on an electrostatic printer/plotter was still very imperfect, with the stairstep nature of curves clearly visible - but at 300 dpi, on an early laser printer, the result looked a lot like "real" typewritten or even typeset copy, although the edges were still visibly a bit "soft" rather than crisp.

This is not to contradict what was noted in another thread - that even 600 dpi is not adequate to do justice to (at least some) typefaces, but 1200 dpi produces results that (apparently) stand up to comparison with metal type. But it is to note that below 300 dpi, one will run into problems even with unsophisticated readers, because the limitations in quality become obvious.

Joshua Langman's picture

Just to throw another number out — Bringhurst mentions printing type by offset lithography at 2800 dpi.

kentlew's picture

Just to be clear, he’s not talking about the resolution of the offset lithography, per se. What he means is the offset plates were created via a high-quality imagesetter. Does he mention whether from films or computer-to-plate?

Joshua Langman's picture

He says "text set directly to plate," from which I assume he means digital computer-to-plate technology.

kentlew's picture

Alright then. In that case, I should have said “via a high-quality platesetter.” That’s about as direct a translation of the outlines to print as you can get. 2800 is pretty hi-fidelity. For text sizes, it certainly helps to have the extra dots.
