lim jia sheng,

Design Principles



Observation

An active act of acquiring information about a subject matter in its natural setting.

It enables designers to see problems, ideas, & subsequently solutions.

Artwork from Observation

  • image-20210529220749609
  • image-20210529220915578

Artworks can be either stylized or realistic, depending on the artist's interpretation of their observations.

project[2]: Sense of Place



  • Observe surroundings & record them using photographs or sketches.
  • Create a piece that is our own interpretation of a place, translated from the visuals obtained in the photos & sketches.
  • Write a 150-200 word rationale.


The examples shown during the lecture reminded me a lot of the concept of "liminal spaces" — a space where you have left something behind, yet you are not yet fully in something else. (Stole that definition from what seems like a cult blog, great read though xd)

  • Have You Been in a Liminal Space? – The Vigornia
  • Non-Space. The whole idea of a liminal space can… | by Cute_Noumena | Medium
  • Image
  • Image
  • Image

Figure 1.2.7, Liminal spaces from the web, n.d.

What they have in common is that they're "passing" places — not distinct enough to stand out in one's memory, but not forgettable enough to stay out of it. The weird lighting increases the impact of the image, though I'm not so sure why (maybe because it merges with our own real/faux memories, like how a broken machine works best with corrupted data).

  • 20200918_021438
  • 20200918_021500

Figure 1.2.10, My own attempts at recreating the same vibe the night before moving out of my childhood home, 18/9/2020

I spent a lot of time in that room, which makes it doubly interesting that I was able to recreate the same effect on it. What if I could recreate that eerie, oddly soothing feeling with the place freshest in my memory — my current room?

I went on to find a little more art that encompassed what I was looking to create.

  • img
  • img
  • img

Figure 1.2.14, Ferdinanda Florence's various works, n.d.

An excerpt from an article about the works:

“The spaces I’m painting are (or were) as real as ever, but there is a surreal element that is slipping into the compositions. I’m stripping away more of the things that might offer a reliable sense of location, creating somewhat unmoored perspectives as I try to work my own, unreliable internal compass.”

I thought that was an awesome way to describe it... removing the sense of """location""".

With the general direction down, I went on to do some research on how I would execute this whole shebang. I really wanted to do oil paint, but total lockdown was starting in 2 days & I wasn't trying to file for bankruptcy, so I decided I'd do it in 3D.

While stumbling around, I found this discussion about achieving an oil paint style in Blender's rendering engine, and within it was this:


Figure 1.2.15, A not-really-realistic representation of an oil painter's rendering of DOF, 10/2021

Liminal spaces don't usually have depth of field (DOF), but I might be able to abuse that effect along with some blending to reduce the "placeness" of the place.


I noticed in Ferdinanda's works that the lighting was a combination of soft & harsh shadows. I took a few images around my room with (relatively) simple geometry so I could use them for lighting studies.

  • 20210530_145941
  • 20210530_150053
  • 20210530_151653

Figure 1.3.4, Lit places around my room, 28/5/2021


Figure 1.3.5, Sketch trying to dream-ify sharp shadows, 28/5/2021


Figure 1.3.6, (Partial) sketch trying to combine both sharp & dreamy lighting, 28/5/2021

With an idea of the lighting in place, I went on to try out some composition stuff.

  • 20210530_010325
  • 20210530_010434

Figure 1.3.9, Wonky angles of my room, 28/5/2021

Unfortunately, my phone suffers from one of those notorious dumb caveats someone should be mildly reprimanded for: I can't enable flash for the wide-angle lens. So, you'll need to imagine the lighting for now.

To get the lighting itself though, I'll have to remake the entire scene in 3D, since a phone sensor simply doesn't output enough bits. Through rendering, I can get a log-encoded OpenEXR output with a full 32 bits per channel to work with, as well as depth maps to accurately paint depth.
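A toy sketch of that bit-depth argument (all values made up): a 32-bit float render keeps highlight values above 1.0, so pulling exposure down in post recovers them, while an 8-bit capture has already clipped them to white.

```python
import numpy as np

# Fake 1-D "scanline" with a highlight 4x brighter than white.
scene = np.array([0.02, 0.5, 1.0, 4.0], dtype=np.float32)

# 8-bit capture: clip to [0, 1], quantise to 256 levels.
captured_8bit = np.round(np.clip(scene, 0.0, 1.0) * 255) / 255

# Pull exposure down 2 stops (divide by 4) on both versions.
exr_graded = scene / 4          # highlight detail comes back: 1.0
jpg_graded = captured_8bit / 4  # highlight is stuck at flat 0.25

print(exr_graded[-1], jpg_graded[-1])
```

The float version still knows the highlight was 4x over white; the 8-bit version threw that away at capture time & can never get it back.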

With the technical reasoning out of the way, I went ahead with it. The first step was naturally to measure things out to keep the proportions relatively the same. To do that, I used the dimensions of a tile (30cm × 6cm) to extrapolate x & y, and a simple tape measure (plus a phone app for places I couldn't reach) for z.

// sections are parsed sequentially from 0 to dimension length, something like: foreach z(foreach y(foreach x))

    room: {
        3d sections: [
            [612, 366, 272]
            [72, 264, 272]
            [342, 276, 272]
        ]
        common: {
            archway: [90.5, 18, 223]
            archway pillar: [4.5, 18, 272]
            wall decorative bottom parts offset: [2, 2, 0]
            divider: [96, 2, 65.5]
        }
    }
    closet: {
        total: [56, 255, 272]
        delimiter thickness: 2.5
        z sections: [
            9.5
            ——
            35.5
            ——
            35.5
            ——
            ——
            181.5
            ——
            9.5
        ]
        y sections: [
            6
            45
            ——
            ——
            69
            ——
            ——
            119
            6
        ]
    }
    bed: {
        total: [182, 184, 55]
        z sections: [
            32
            21
            2
        ]
    }
    bedside counter: {
        total: [29, 178, 78]
        delimiter thickness: 2
        y sections: [
            ——
            60
            ——
            40
            ——
            66
            ——
            6
        ]
        z sections: [
            ——
            74
            ——
        ]
    }
    shelf thing: {
        total: [33, 60, 148]
        delimiter thickness: 2
        z sections: [
            12
            ——
            32
            ——
            32
            ——
            42
            ——
            22
        ]
    }


Figure 1.3.10, 2/3 dimensions of measurements, 29/5/2021
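The tile-counting trick from above, as a throwaway sketch (the 20.4 count is back-solved from the 612cm measurement for illustration, not an actual count):

```python
# Extrapolating x & y spans from floor tile counts instead of running
# a tape measure across the whole room.
TILE_X, TILE_Y = 30.0, 6.0  # cm, tile dimensions from the post

def span_cm(n_tiles: float, tile_len: float) -> float:
    """Wall length = number of tiles along it times the tile length."""
    return n_tiles * tile_len

# ~20.4 tiles along the long wall of the first room section
print(span_cm(20.4, TILE_X))
```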


Figure 1.3.11, Progress in modelling the 3rd dimension, 30/5/2021


Figure 1.3.12, Progress of putting in details, 30/5/2021


Figure 1.3.13, Making stuffed animals, 31/5/2021


Figure 1.3.14, The world's wrinkle-est shirt, 30/5/2021

I actually managed to model every single prop used in the scene instead of pulling them from random sources. This gave me a super painful time modelling, but a surprisingly painless time texturing, as I didn't have to worry about models that were pre-decimated, too high-res, or just plain broken.

Then, with the 3D out of the way, I left it to simmer (read: render) overnight & continued the process with compositing.


Figure 1.3.15, The raw 32-bit EXR render mapped to an 8-bit JPG, 31/5/2021

Even without any post-processing, it's definitely radiating some sort of eeriness.


Figure 1.3.16, Creating smoke from z-depth, 31/5/2021
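My guess at what that z-depth-to-smoke step boils down to, sketched in NumPy rather than nodes (the function & its parameters are assumptions for illustration, not Blender's API):

```python
import numpy as np

# Fake fog from a z-depth pass: remap depth to 0..1, then mix each
# pixel toward a fog colour by that amount. Far pixels drown in fog,
# near pixels stay clean.

def add_depth_fog(rgb, z, fog_color, z_near, z_far):
    """rgb: HxWx3 float image; z: HxW depth pass in scene units."""
    t = np.clip((z - z_near) / (z_far - z_near), 0.0, 1.0)
    return rgb * (1.0 - t[..., None]) + np.asarray(fog_color) * t[..., None]

# toy 1x2 image: one near pixel, one far pixel
img = np.zeros((1, 2, 3))
z = np.array([[0.0, 10.0]])
foggy = add_depth_fog(img, z, fog_color=(0.8, 0.8, 0.8), z_near=0.0, z_far=10.0)
# near pixel stays black, far pixel ends up at the fog colour
print(foggy)
```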

After a long process of adding distortion, DOF, & fog, I had a first version:


Figure 1.3.17, First version, 31/5/2021

The main problem here is the harsh edge between where DOF should & shouldn't be.


Figure 1.3.18, Distortion & DOF being cut off, 31/5/2021

To solve the DOF, I could re-render the whole thing with DOF enabled natively in Blender, but the hours that would take would be stupid. Cycles, Blender's renderer, works in "samples", & every sample takes a roughly fixed amount of time: the higher the sample count, the lower the noise & the more accurate the light bounces. Native DOF complicates things for the renderer, making each sample slower & demanding more samples to hit the same noise ratio. And all that still wouldn't fix the broken distortion, which can't be replicated natively & needs compositing with the z-depth map anyway.
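A back-of-envelope sketch of that cost blow-up (every number here is made up; the point is that the two DOF penalties multiply):

```python
# Rough render-time estimate: native DOF both slows each sample and
# demands more samples for the same noise level, so the costs multiply.
samples = 512
sec_per_sample = 2.0          # assumed baseline cost per sample
dof_sample_slowdown = 1.5     # assumed: each sample is slower with DOF
dof_extra_samples = 2.0       # assumed: more samples for the same noise

plain_h = samples * sec_per_sample / 3600
dof_h = samples * dof_extra_samples * sec_per_sample * dof_sample_slowdown / 3600
print(f"{plain_h:.2f} h without DOF vs {dof_h:.2f} h with")
```

With these made-up numbers the DOF render is 3x the cost, before even touching the distortion problem.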

After scouring many lands for a solution, I ended up just experimenting on my own, & found that if I just cheated & applied some blur to the edges of the depth map, things far away would seem all hunky-dory.


Figure 1.3.19, Blurred edges of depth map, 1/6/2021
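The cheat itself can be sketched like this, using a simple 1-D box blur in place of whatever Blender's blur node actually does (the `strength` & `blur` knobs are made up for illustration):

```python
import numpy as np

# Soften the z-depth pass before it drives the per-pixel DOF radius,
# so the blur amount ramps across object edges instead of jumping.

def dof_radius(z, focus_z, strength=0.5, blur=0):
    z = z.astype(np.float64)
    if blur:  # horizontal box blur of width `blur`
        kernel = np.ones(blur) / blur
        z = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), 1, z)
    return np.abs(z - focus_z) * strength

# toy depth map with a hard edge: near object (z=0) against far wall (z=10)
z = np.zeros((32, 32))
z[:, 16:] = 10.0

hard = dof_radius(z, focus_z=0.0)           # radius jumps 0 -> 5 at the edge
soft = dof_radius(z, focus_z=0.0, blur=9)   # radius ramps up gradually
```

The harsh cut-off in Figure 1.3.18 is exactly the `hard` case; blurring the map first turns the step into a ramp the eye reads as natural falloff.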

So with that out of the way, I could finally wrap this all up.


  • 2/6/2021
    • "Powerful" execution of the topic at hand.
    • Made Dr. Charles sad (sorry xd).



Figure 1.4.1, there's no room, 1/6/2021

It's my room. It's the place where I do my things. It's the place where I feel my things. There are good & bad days. Sometimes I even remember them. I'm not sure which of those days are which though. They blur together a lot. Come to think of it, why does it seem like there have been more bad ones? It could be something in the air. It could be something in the closet. It could be something in my room.

440 days since the first MCO. Many things have happened, a lot of it here, where I've been cooped up. I've gained new understandings of the humans around me. I've lost enough weight to "outgrow" my pants. I've met many new people & lost many new people I've met. Somehow though, things still feel static. I don't think it's too far-fetched to pin that feeling onto the place — the constant throughout all the events. No major anchor points to remember, no minor nuances to forget, suffocating on the same air. I need to touch some grass.


This was actually kind of fun. Things took way too long, like all 3D scenes do. Given the time constraints though, I'm still super proud of what I managed to create. The main thing I took away was the power of an environment in manipulating the mood of a viewer, & how by emphasizing our own perceptions of said environment, we can push that potency onto everyone else (& make it their problem as well). Overall, I know I'll definitely pay more attention to my surroundings, whether they're natural or artificial; around me or in front of me.