The idea of non-photorealistic rendering gets to the very heart of what computer graphics is. That is, it isn't the goal of computer graphics to simulate light; it is the goal of computer graphics to convey meaning. (And, for that matter, it's the goal of your games to convey meaning.)
The key ideas of non-photorealistic rendering are abstraction (removing unimportant detail), ambiguity (deliberately removing important detail), and emphasis (highlighting important detail).
The key problem in non-photorealistic rendering for games is temporal continuity: keeping the stylization coherent from frame to frame (though whether you want that coherence or not is a question of art).
There is no wrong way to render non-photorealistically, except if you render photorealistically. Here are a few ingredients that are useful when constructing NPR renderings.
Even in an otherwise "realistic" pipeline, art direction can definitely create a non-photorealistic atmosphere; one can, e.g., filter and simplify textures to remove detail; play with lighting, fog, etc. to reduce or enhance contrast; or modify geometry to create abstract figures.
One of the simplest ways to get a non-photorealistic effect is to highlight depth and normal edges in a post-process filter (course notes). This is very effective, but has some downsides: line width is fixed in screen space regardless of distance, and fine edges tend to flicker from frame to frame.
See, for example, this SIGGRAPH talk about Borderlands: slides.
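Here's a rough sketch of the filter, written CPU-side for clarity (in a game this would be a fragment shader reading depth and normal G-buffers); the buffer layout and the two thresholds are illustrative assumptions:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Depth + normal edge detection, CPU-side for clarity. 'depth' holds one
// linear depth per pixel; 'normal' holds unit view-space normals packed as
// xyz triples. A pixel is an edge if depth jumps sharply (silhouettes) or
// if neighboring normals diverge (creases).
std::vector<float> detect_edges(std::vector<float> const &depth,
                                std::vector<float> const &normal,
                                std::size_t W, std::size_t H,
                                float depth_threshold = 0.1f,
                                float normal_threshold = 0.8f) {
	std::vector<float> edge(W * H, 0.0f);
	for (std::size_t y = 1; y + 1 < H; ++y) {
		for (std::size_t x = 1; x + 1 < W; ++x) {
			std::size_t i = y * W + x;
			// Depth edge: large difference to any 4-neighbor.
			float dd = std::max({std::abs(depth[i] - depth[i - 1]),
			                     std::abs(depth[i] - depth[i + 1]),
			                     std::abs(depth[i] - depth[i - W]),
			                     std::abs(depth[i] - depth[i + W])});
			// Normal edge: a neighbor's normal points a different way.
			auto ndot = [&](std::size_t j) {
				return normal[3*i+0] * normal[3*j+0]
				     + normal[3*i+1] * normal[3*j+1]
				     + normal[3*i+2] * normal[3*j+2];
			};
			float nd = std::min({ndot(i - 1), ndot(i + 1), ndot(i - W), ndot(i + W)});
			if (dd > depth_threshold || nd < normal_threshold) edge[i] = 1.0f;
		}
	}
	return edge;
}
```

Both tests matter: silhouettes show up as depth discontinuities, while creases between faces show up only in the normals.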
Bilateral filtering blurs together pixels that are similar while leaving pixels that are different alone, so flat regions smooth out but edges survive. See: Real-Time Video Abstraction.
Reasonable real-time version of bilateral filter: Bilateral grid.
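For reference, here's what the brute-force filter computes (the bilateral grid approximates this at much lower cost); a grayscale, CPU-side sketch with made-up sigma defaults:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Brute-force bilateral filter on a grayscale image: each output pixel is
// a weighted average of its neighborhood, where weights fall off with both
// spatial distance and difference in value. Similar pixels blur together;
// pixels across a strong edge barely contribute, so the edge survives.
std::vector<float> bilateral(std::vector<float> const &img,
                             std::size_t W, std::size_t H,
                             int radius = 4,
                             float sigma_space = 2.0f,
                             float sigma_value = 0.1f) {
	std::vector<float> out(W * H);
	for (std::size_t y = 0; y < H; ++y) {
		for (std::size_t x = 0; x < W; ++x) {
			float center = img[y * W + x];
			float sum = 0.0f, wsum = 0.0f;
			for (int dy = -radius; dy <= radius; ++dy) {
				for (int dx = -radius; dx <= radius; ++dx) {
					int sx = int(x) + dx, sy = int(y) + dy;
					if (sx < 0 || sy < 0 || sx >= int(W) || sy >= int(H)) continue;
					float v = img[std::size_t(sy) * W + std::size_t(sx)];
					// spatial weight * range (value-similarity) weight:
					float w = std::exp(-float(dx*dx + dy*dy) / (2.0f * sigma_space * sigma_space))
					        * std::exp(-(v - center) * (v - center) / (2.0f * sigma_value * sigma_value));
					sum += w * v;
					wsum += w;
				}
			}
			out[y * W + x] = sum / wsum;
		}
	}
	return out;
}
```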
Thresholding a stroke against a paper texture simulates how pigment is caught by peaks and misses valleys in the paper (e.g. strokes in WYSIWYG NPR); there's a sketch of this below. Note that for watercolor this is actually reversed: pigment pools in the valleys (e.g. watercolor).
Applied in screen space, these paper and filter effects produce the "shower-door effect": the stylization sticks to the screen while the scene slides behind it. This can be desired, but it can also be distracting.
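Here's a minimal sketch of the thresholding itself, assuming a paper height texture in [0, 1]; the function name and the soft-threshold band are illustrative:

```cpp
#include <algorithm>

// Threshold a stroke against a paper height map. Peaks (high paper_height)
// catch pigment with little pressure; valleys need a lot. 'pressure' is the
// stroke's opacity/pressure at this texel and 'paper_height' is sampled
// from a paper texture, both in [0,1]. The smoothstep band softens the
// threshold to avoid hard aliasing. For watercolor, flip the comparison so
// pigment pools in the valleys instead.
float stroke_deposit(float pressure, float paper_height, float band = 0.05f) {
	float threshold = 1.0f - paper_height; // peaks are easy to hit
	float t = std::clamp((pressure - threshold) / band + 0.5f, 0.0f, 1.0f);
	return t * t * (3.0f - 2.0f * t); // 0 = bare paper, 1 = fully inked
}
```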
You can also extract edges by looking at mesh edges -- remember stencil shadows? An edge is on the silhouette when one adjacent face points toward the camera and the other points away. Additional wrinkle: if you need really smooth-looking strokes, use a level set instead of mesh edges and build edge chains (WYSIWYG NPR, again).
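A sketch of that test, assuming an edge list that already records the two triangles sharing each edge:

```cpp
#include <array>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
	return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// An edge and the two triangles that share it (indices into 'tris').
struct Edge { uint32_t v0, v1, tri0, tri1; };

// An edge is on the silhouette when one adjacent triangle faces the eye
// and the other faces away -- the same test stencil shadow volumes use
// (there with respect to the light instead of the eye).
std::vector<Edge> silhouette_edges(std::vector<Vec3> const &verts,
                                   std::vector<std::array<uint32_t,3>> const &tris,
                                   std::vector<Edge> const &edges,
                                   Vec3 eye) {
	auto faces_eye = [&](uint32_t t) {
		Vec3 a = verts[tris[t][0]], b = verts[tris[t][1]], c = verts[tris[t][2]];
		return dot(cross(sub(b, a), sub(c, a)), sub(eye, a)) > 0.0f;
	};
	std::vector<Edge> out;
	for (Edge const &e : edges) {
		if (faces_eye(e.tri0) != faces_eye(e.tri1)) out.push_back(e);
	}
	return out;
}
```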
With increasingly powerful graphics hardware, you can think about rendering paint strokes on objects: OverCoat: an implicit canvas for 3D painting (and follow-up work). Basically, this means splatting many, many brush-texture sprites over each other (which also need to be depth sorted).
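The core of the splatting loop is just a depth sort followed by blending; a sketch, with a made-up Splat struct:

```cpp
#include <algorithm>
#include <vector>

struct Splat {
	float view_z; // camera-space depth (more negative = farther from camera)
	// ...position, size, color, brush texture id, etc.
};

// Alpha-blended sprites only composite correctly back-to-front, so sort by
// camera-space depth before drawing. With many thousands of splats, this
// sort has to be redone whenever the camera moves.
void draw_splats(std::vector<Splat> &splats) {
	std::sort(splats.begin(), splats.end(),
	          [](Splat const &a, Splat const &b) { return a.view_z < b.view_z; });
	for (Splat const &s : splats) {
		// submit 's' as a textured, alpha-blended quad (rendering omitted)
		(void)s;
	}
}
```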
What if we instead attached textures to objects as expected, but were able to select textures with more or less detail automatically? Turns out we can do exactly this! This is what mip maps do! See, for example, Real-Time Hatching and Computer-Generated Pen-and-Ink Illustration.
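Real-Time Hatching builds on this with "tonal art maps": a stack of hatch textures ordered by tone, with mip levels constructed so strokes stay stroke-like at every scale. A sketch of the tone-axis blend (CPU-side stand-in for the shader; names are illustrative):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// A "tonal art map": hatch textures ordered from lightest (levels[0]) to
// darkest. To shade a point at a given diffuse tone, blend the two textures
// bracketing that tone -- the same interpolation trick mip mapping uses for
// distance, applied along the tone axis instead.
float sample_tam(std::vector<std::vector<float>> const &levels,
                 float tone,          // 0 = black, 1 = white
                 std::size_t texel) { // texel index (mip level already chosen)
	float pos = (1.0f - tone) * float(levels.size() - 1);
	std::size_t lo = std::size_t(pos);
	std::size_t hi = std::min(lo + 1, levels.size() - 1);
	float frac = pos - float(lo);
	return (1.0f - frac) * levels[lo][texel] + frac * levels[hi][texel];
}
```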