Reality Check with a Modern Barn in UE4

I’ve been following this work by Guilherme Rabello since he first shared progress images, and I must say it looks good. Very good! It is super inspiring, and now he has shared a very informative post on TALK, which I recommend you all follow.

Reality check number one, in terms of game engines being able to pull off an ArchViz project, was the Unreal Engine 4 + ArchViz tests by Koola. That sparked the process that led to The Vineyard challenge on the blog, in partnership with Epic Games!

Factory Fifteen took the Grand Prize in that one, and we all know what has become possible in ArchViz with UE4 and other real-time tools since then, which makes what Guilherme Rabello is doing even more impressive. It is the first time I’m seeing ArchViz-related world building on such a large scale, taking care of the small details that look so photoreal by leveraging photogrammetry-based scanned assets.

This represents work done over the past couple of months, shared on Twitter and now also on TALK under the PROCESS category.

It is clear that it takes time and effort to get things to look like that with UE4. That is probably why such achievements are few and far between.

Join Guilherme as he dives into his process…

Notable Replies

  1. rabellogp says:

    Thanks @ronen
    About your questions:

Back to good old 3DS Max, which I’ve been using for the last 15 years :laughing:

My goal was to import the Sketchup project into 3DS Max and do all the necessary preparation for later export to Unreal. By “preparation” I mean subdividing or merging meshes to get a decent surface area for each object, and generating proper UVs for lightmaps.

But that specific Sketchup model I got was too messy. Lots of geometry with missing faces, walls and floors not precisely placed (with gaps between them), etc… So I decided to remodel the house from scratch using the DWG blueprints as a basis.

That is an interesting thing to note about the Unreal workflow for archviz. You can easily get away with “bad geometry” in offline renders, especially if you’re only going to show a small part of it. But for real-time in Unreal you want the geometry as “perfect” as possible to avoid headaches in the next steps of the workflow: UVs for lightmaps, Lightmass calculations, performance, real-time shadows… All of that depends on the geometry somehow.
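
    An editor-side sketch of that idea (my illustration, not Guilherme’s code): when importing via FBX you can at least make the engine generate the lightmap UV channel and drop degenerate triangles, so messy CAD/Sketchup geometry surfaces at import time instead of at light-build time. Assumes an editor module depending on UnrealEd.

    ```cpp
    // Editor-only sketch: pre-configure FBX import for archviz meshes.
    #include "Factories/FbxImportUI.h"
    #include "Factories/FbxStaticMeshImportData.h"

    void ConfigureArchVizImport(UFbxImportUI* ImportUI)
    {
        // Generate a dedicated second UV channel for lightmaps.
        ImportUI->StaticMeshImportData->bGenerateLightmapUVs = true;
        // Merge loose pieces so each mesh has a sensible surface area.
        ImportUI->StaticMeshImportData->bCombineMeshes = true;
        // Drop zero-area triangles that messy exports often contain.
        ImportUI->StaticMeshImportData->bRemoveDegenerates = true;
    }
    ```

    Auto-generated lightmap UVs are only a fallback, though; hand-made UVs in Max, as he describes, still win for archviz.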

    I like to use this website to find terrain data and download the heightmaps: http://terrain.party/
    For the satellite imagery I use Google Earth.

    Ok, let’s talk more about that because I think it is a major flaw in UE4.

I can say that my baseline is the default UE4 values. If you place a skylight and a sunlight in your scene, their default values are 1 for the sky and 3.14 for the Sun. If I’m not mistaken, Epic says that proportion will get you physically accurate sunny daylight intensities.
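
    For reference, here is that baseline expressed in code (a minimal sketch of the defaults he describes, not part of his project):

    ```cpp
    // Set the stock UE4 sun/sky baseline: ~3.14 for the sun, 1.0 for the sky.
    #include "Engine/DirectionalLight.h"
    #include "Engine/SkyLight.h"
    #include "Components/LightComponent.h"
    #include "Components/SkyLightComponent.h"

    void SetBaselineDaylight(ADirectionalLight* Sun, ASkyLight* Sky)
    {
        Sun->GetLightComponent()->SetIntensity(3.14f);
        Sky->GetLightComponent()->SetIntensity(1.0f);
        // Re-capture the sky so the new intensity is actually applied.
        Sky->GetLightComponent()->RecaptureSky();
    }
    ```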

But you want a sky texture in your scene, so you add a dome with a sky texture or an Atmospheric Fog actor. Now that skylight value of 1 only acts as a multiplier for the intensity of whatever it is grabbing light information from. Your sky intensity now depends on the HDR texture you’re using, or the settings of your Atmospheric Fog actor, or the color of a third-party sky blueprint…

And now you need to tweak the intensity of that skylight source to get physically correct light. But how do you do that? You could use a table like the Sunny 16 rule as a basis. But for that to work you’d need a physical camera system. You have one in UE4, but if you set its exposure parameters to match the Sunny 16 table you’ll get a pitch-black image, because the values used in the Sun/Sky lights don’t make physical sense… You could increase them (a lot) until you start seeing something, and use a real photo as a basis to discover some proportion between UE4 units and lux, for example. And you can indeed do that, but then you’ll start getting glitches all around (with reflection probes and auto exposure) because those other systems clearly are not prepared to work with such high light values.
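
    To make the mismatch concrete, here is the back-of-envelope math (my illustration; the meter constant C is an assumption, real meters use roughly 240–340): Sunny 16 at ISO 100 means f/16 at 1/100s, i.e. EV = log2(N²/t) ≈ 14.6, which converts to roughly 64,000 lux of incident light, about four orders of magnitude above the engine’s default sun value of 3.14.

    ```cpp
    // Standalone check of the Sunny 16 arithmetic above.
    #include <cmath>
    #include <cstdio>

    int main()
    {
        const double N   = 16.0;        // aperture: f/16
        const double t   = 1.0 / 100.0; // shutter: 1/ISO seconds
        const double ISO = 100.0;
        const double C   = 250.0;       // assumed incident-meter constant

        const double EV  = std::log2(N * N / t);          // ~14.6
        const double Lux = (C / ISO) * std::pow(2.0, EV); // ~64,000 lux

        std::printf("EV = %.2f -> about %.0f lux\n", EV, Lux);
        return 0;
    }
    ```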

The user Daedalus51 has a great post where he brilliantly discusses this on Epic’s forums: https://forums.unrealengine.com/development-discussion/rendering/1414326-4-19-physical-lights?p=1454629#post1454629 (That guy is awesome, by the way. He works at DICE and has a series on YouTube about lighting in UE4. Lots of precious information there: https://www.youtube.com/user/51Daedalus/videos)

In the end, what I do is use real photo references to eyeball the correct light intensities while keeping the sunlight and skylight values in their “normal” range in the engine.

• I start by setting up a Post Process volume with no effects at all (0 bloom, 0 vignette, fixed exposure…).
    • I place a sunlight with a value close to its default (around 3.14) to make sure all the other systems will behave properly.
    • I place my skylight and set up a skydome with the (unlit) sky material I want.
    • The sunlight position must be adjusted to match the Sun in the texture.
    • I plug a multiplier parameter into the sky texture, create a material instance and tweak it in real time until I get the light intensities the way I want. Here I use the photo references (or sometimes just guess). See the sketch after this list.
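
    A sketch of that last step (the parameter name "SkyIntensity" is made up; in the editor he tweaks a material instance, and a dynamic instance is the runtime equivalent):

    ```cpp
    // Drive the sky texture multiplier in real time while comparing
    // against photo references.
    #include "Materials/MaterialInstanceDynamic.h"
    #include "Components/StaticMeshComponent.h"

    void TweakSkyIntensity(UStaticMeshComponent* SkyDome, float NewIntensity)
    {
        // Slot 0 is assumed to hold the unlit sky material.
        UMaterialInstanceDynamic* SkyMID = SkyDome->CreateDynamicMaterialInstance(0);
        if (SkyMID)
        {
            SkyMID->SetScalarParameterValue(TEXT("SkyIntensity"), NewIntensity);
        }
    }
    ```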

That’s with the preview lighting. After building lights, things change and another round of tuning is needed. Sometimes Post Process adjustments may be enough, though.

Each change in lighting conditions requires at least one new round of that process. If I change the Sun height and sky texture, it won’t be possible to guess the new values that will result in precise lighting for the new condition. And I’m the kind of guy that constantly plays around with light until I find something that pleases me, so I spend a lot of time on that process.

The project is huge, but only the house and some elements around it are static. So I wouldn’t expect 12h of light building… My last interior project builds in 3~4h with the same Lightmass settings and a similar amount of static objects. There is a lot of room for optimization though, mainly in the lightmap resolution of those little meshes imported with Datasmith. Maybe I can improve that time if I can decrease the RAM usage so my system won’t resort to virtual memory swapping.
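
    One possible shape for that optimization pass (a sketch, not his code): clamp the lightmap resolution override on the small Datasmith meshes so Lightmass memory stops ballooning.

    ```cpp
    // Cap the per-component lightmap resolution of a small mesh.
    #include "Components/StaticMeshComponent.h"

    void ClampLightmapResolution(UStaticMeshComponent* Mesh, int32 MaxRes)
    {
        Mesh->bOverrideLightMapRes = true;
        Mesh->OverriddenLightMapRes = FMath::Min(Mesh->OverriddenLightMapRes, MaxRes);
        Mesh->MarkRenderStateDirty(); // pick up the new lightmap settings
    }
    ```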

  2. rabellogp says:

    Thanks! Yes, I’m using DF Shadows and cascaded shadows.

Cascaded shadows will give you the definition you need for small and medium details nearby. You must tweak Dynamic Shadow Distance Stationary Light and Distribution Exponent to achieve what best suits you. I also like to decrease Shadow Bias (the default is 0.5 and I usually set it to 0.3 or 0.2) to increase definition. The trade-off is that you’ll start getting visible shadow banding, so it must be tweaked with caution.

Outside the cascaded shadows radius, the DF Shadows take over (if they’re enabled, of course). They aren’t nearly as detailed as cascaded shadows, but they do a good job for medium-to-far distance foliage (and even buildings sometimes). I like to reduce Ray Start Offset Depth Scale to get some extra detail out of them.

    [image: shadows]
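
    Those settings live on the directional light component; roughly like this in code (a sketch with assumed distance values, since his exact numbers aren’t in the post):

    ```cpp
    // Tune cascaded + distance field shadows on the sun light.
    #include "Components/DirectionalLightComponent.h"

    void TuneSunShadows(UDirectionalLightComponent* Sun)
    {
        // Cascaded shadow maps: crisp definition for nearby detail.
        Sun->DynamicShadowDistanceStationaryLight = 8000.0f; // assumed, in cm
        Sun->CascadeDistributionExponent = 3.0f;             // assumed
        Sun->ShadowBias = 0.3f; // default is 0.5; lower = sharper, risk of banding

        // Distance field shadows take over beyond the cascades.
        Sun->bUseRayTracedDistanceFieldShadows = true;
        Sun->RayStartOffsetDepthScale = 0.001f; // reduce for extra detail, per the post
        Sun->MarkRenderStateDirty();
    }
    ```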

Sometimes I also use Contact Shadows to add some extra detail to small objects. They’re good for grass, but you need to be careful, because real lawns are denser than what you’re probably recreating in real time, and the shadows of individual blades in real life won’t be as noticeable as they will be in your sparser UE4 lawn (thus revealing a lot of contact shadows in an unrealistic way). Contact Shadows also add artifacts that are very apparent on flat surfaces.
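
    Per-light contact shadows come down to a single property (sketch; the length value is my assumption):

    ```cpp
    // Enable a short screen-space contact shadow trace on the sun.
    #include "Components/DirectionalLightComponent.h"

    void EnableSubtleContactShadows(UDirectionalLightComponent* Sun)
    {
        Sun->ContactShadowLength = 0.02f; // fraction of screen space; keep it small
        Sun->MarkRenderStateDirty();
    }
    ```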

About AO baked to vertex color: that’s a very handy SpeedTree feature. You can obviously use it for controlled AO in the materials, but also as a mask to reduce specular, reduce subsurface and decrease color intensity in the interior of the tree. It gives you artistic power to compensate for the absence of baked lighting in your foliage.

    For the grass material I use Masked - Two Sided Foliage (with the same vertex AO tricks I use on the trees).

Thank you! It has some blueprint coding involved, but the overall process is a lot lazier than that. :laughing:

I used a Vive and a default First Person Character actor:

• Add a SceneCaptureComponent2D to the FPChar, linked to one of the Motion Controllers.
    • Add a cube linked to the same controller, re-scale it until you get a rough smartphone shape :sweat_smile: and apply a material to it with the render target linked to the SceneCaptureComponent2D. Now you have a smartphone filming stuff in VR, and you can actually see what it’s capturing while in VR :joy:
    • Add a camera in the world.
    • In the Level Blueprint, add references to that camera and the character actor.
    • Use the Tick node to update the camera’s location and rotation to the values you’re getting from the character’s SceneCapture (see the sketch below).
    • Add that camera to the Sequence Recorder.
    • Hit the “record” button, start playing your game in VR and record stuff with your fake smartphone.

In the end you’ll have a sequence with a motion-captured camera. Then you can play it back or compose it with other sequences, etc…
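
    A rough C++ equivalent of the Level Blueprint part (names are mine; the original is blueprint, not C++):

    ```cpp
    // Called every Tick: copy the VR "smartphone" capture transform onto
    // the camera the Sequence Recorder is tracking, which effectively
    // motion-captures the handheld camera.
    #include "Components/SceneCaptureComponent2D.h"
    #include "Camera/CameraActor.h"

    void SyncRecordedCamera(USceneCaptureComponent2D* PhoneCapture,
                            ACameraActor* RecordedCamera)
    {
        RecordedCamera->SetActorLocationAndRotation(
            PhoneCapture->GetComponentLocation(),
            PhoneCapture->GetComponentRotation());
    }
    ```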

  3. rabellogp says:

    Hey, guys.

    A couple of pictures of some lighting tests that I’ve been doing:

I’m trying to use Lighting Scenarios so I can keep 2 or 3 different lighting situations in the same main level. The drawback is that RAM usage increases a lot, since the lightmaps of all the Lighting Scenarios are loaded at once with the main level.

The alternative would be keeping 2 or 3 copies of the same level with different lighting setups, but that workflow is a nightmare when you’re constantly updating the project.

For each Lighting Scenario level I have a different:

    • Direct light (sunlight)
    • Sky light
    • Sky dome with its own EXR sky texture
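
    Switching between scenarios at runtime then comes down to streaming the right sub-level (a sketch; the level name is made up, and the currently visible scenario has to be hidden or unloaded first, since only one Lighting Scenario may be visible at a time):

    ```cpp
    // Stream in one Lighting Scenario sub-level.
    #include "Kismet/GameplayStatics.h"

    void LoadSunsetScenario(UObject* WorldContext)
    {
        FLatentActionInfo Latent;
        Latent.UUID = 1; // any unique id for the latent action
        UGameplayStatics::LoadStreamLevel(
            WorldContext, FName("Lighting_Sunset"),
            /*bMakeVisibleAfterLoad*/ true, /*bShouldBlockOnLoad*/ false, Latent);
    }
    ```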

    I’ll be posting the final footage of that project soon.

Continue the discussion at talk.ronenbekerman.com
