0344034 · BDCM · Advanced Interactive Design
task[2]: Interactive AR application
todo:
- Create a filter in Spark AR Studio
process:
I really like AR effects that track the world around you. I'm still in my prime obsession with gyroscope-driven AR experiences, so I really wanted to do something like that. In thinking of an idea of what to make, I kinda imagined: what if I could just draw something out in the background? What's the clearly superior program to do drawing in? ~~Photoshop~~ ~~Procreate~~ ~~Medibang~~ ~~Ibis~~ Microsoft Paint! What if I implemented an AR MS Paint?
Why I used the API over patches
Painting seemed deceptively easy. I just needed to connect up a few patches to make a `ParticleSystem` move according to the user's touches, right? Unfortunately, I was forced into math. You see, the coordinates obtained from touches are in screen space; I needed world space coordinates. Not only did I need world space coordinates, though, I needed coords that were offset a distance away from the camera, & which still corresponded to the user's touches when rendered on the screen.
This meant trigonometry. God damn it. It also meant I had to use scripting instead of Spark's surprisingly resilient visual programming paradigm. God damn it.
Fortunately, it wasn't in Lua or some other forsaken scripting language (Roblox, looking at you). Natively, it seemed to run JavaScript, & it even supported TypeScript. Things seemed to be on the up & up. However, looking into their docs, they seemed to be doing things a little weirdly.
- They used default imports
  - Annoying as it breaks Intellisense auto-imports.
- They used modules with Sentence-cased names
  - Convention is snake_case or kebab-case
- They used a custom `tsconfig.json`, generated on first-script creation, that referenced `%temp%/Facebook/Spark/cache/Scripting`
  - Convention is just dumping types into the local node_modules directory, or even installing a global npm package
  - Annoying as it overrides any custom config I'd want, like `strict: true`
- They only resolved files that were imported into Spark
  - Annoying as I had to manually add every file I created into Spark if I wanted to import it (which I probably did if I created it...)
- They didn't support folders in the UI inspector
  - Annoying as I had no way to organise my scripts even if I wanted to.
All that heavily influenced how I approached the scripts. First was having a clear separation between modules & scripts with side effects (ones that actually controlled behaviour). I chose the `@` prefix for such files, with no reason other than it being a wide enough character & that I'd chosen it before for unrelated projects. Then, to separate them, I sorta implemented Java/C#-style namespaces by fully qualifying them in the filename. Not that effective, as I'd still have to sorta name the individual exports according to what they do, but eh. Also, if anyone asks, yes, I'm on an OO cleanse, that's why there are no native classes or TypeScript namespaces. Don't ask.
With the general way of writing pinned down, I started actually implementing the logic.
Painting logic
Here's how I approached the problem:
Since the camera has focal dimensions that can be arbitrary, based on the device's own hardware, I needed to use that in my solution. If I were to project a ray from (0,0), I could theoretically use `distance * sin(angleRadians)` to get the final coordinates on a plane at a distance of `distance` from (0,0). Getting `angleRadians` was an `atan` away too, since I had the dimensions of the focal plane through `Camera.focalPlane`. I know this all sounds obvious now, but this took VERY LONG at 3AM. Repeating that for both axes gave me `(x,y)`.
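The math above can be sketched as a plain function. This is a hypothetical reconstruction, not the project's actual code: the parameter names are mine, and in Spark the focal-plane dimensions and distance would come from `Camera.focalPlane` rather than being passed in.

```typescript
// Hypothetical sketch of the touch-to-world math described above.
// Assumes a normalized touch coordinate in [-1, 1] for one axis, plus the
// focal plane's half-size and distance (in Spark: Camera.focalPlane).
function touchToWorld(
  touch: number,          // normalized touch coordinate in [-1, 1]
  focalHalfSize: number,  // half-extent of the focal plane on this axis
  focalDistance: number,  // distance of the focal plane from the camera
  distance: number        // how far out to place the painted point
): number {
  // Angle of the ray from (0,0) through the touch point, via atan
  const angleRadians = Math.atan((touch * focalHalfSize) / focalDistance);
  // Offset at `distance` along that ray, via sin, as in the post
  return distance * Math.sin(angleRadians);
}
```

Running it once per axis yields the `(x,y)` pair the particles are emitted at.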
Sprites
In order to get the necessary sprites for the UI, I used a Windows 2000 WASM emulator. In there, I opened up MS Paint, resized it, & took a high quality screenshot. For the dialog, I used VBScript's `MsgBox` function to spawn a small dialog & screenshotted that. After getting the raw sprites, I took them into Photoshop & split them into 3-sliced sprites: top, mid, btm. For the dialog, the OK button was also separated, so it could be modular & used as a touch target.
In Spark, I would grab the texture programmatically, & create materials there, so that I could instantiate """UI""" through script. This mainly enabled me to resize the panels with the correct aspect ratio, & control whether they spanned, centred, or aligned to a side of the main canvas.
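The panel-sizing half of that can be sketched as follows. This is an illustrative reconstruction under my own assumptions (slice sizes in pixels, a uniform scale taken from the target width), not the project's actual layout code:

```typescript
// Hypothetical 3-slice (top/mid/btm) sizing helper: the caps scale
// uniformly to preserve their aspect ratio, and only the middle slice
// stretches to fill whatever height remains.
interface ThreeSlice {
  topPx: number;   // pixel height of the top cap
  midPx: number;   // pixel height of the (stretchable) middle
  btmPx: number;   // pixel height of the bottom cap
  widthPx: number; // shared pixel width of all three slices
}

function sliceHeights(s: ThreeSlice, targetWidth: number, targetHeight: number) {
  const scale = targetWidth / s.widthPx;             // uniform scale from width
  const top = s.topPx * scale;
  const btm = s.btmPx * scale;
  const mid = Math.max(0, targetHeight - top - btm); // middle absorbs the rest
  return { top, mid, btm };
}
```

The same idea extends to spanning, centring, or side-aligning the panel on the canvas: only the target rectangle changes, not the slice math.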
Colour
Initially, I intended to just bundle 6 colours (RYGCBM) plus 3 shades (White, Grey, Black) as picker options. However, when implementing them, I stumbled upon the slider. This would actually enable me to have full HSV coverage: implement a picker each for hue, saturation, & value, then have a slider on each to control that attribute.
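A picker like that ultimately needs an HSV-to-RGB conversion somewhere. Here's a minimal sketch of the standard conversion (my own helper, not code from the project); `h` is in degrees, `s` and `v` in [0, 1]:

```typescript
// Standard HSV -> RGB conversion, returning channels in [0, 1].
function hsvToRgb(h: number, s: number, v: number): [number, number, number] {
  const c = v * s;                                   // chroma
  const x = c * (1 - Math.abs(((h / 60) % 2) - 1));  // intermediate channel
  const m = v - c;                                   // match lightness
  const sector = Math.floor(h / 60) % 6;             // which 60-degree sector
  const table: [number, number, number][] = [
    [c, x, 0], [x, c, 0], [0, c, x], [0, x, c], [x, 0, c], [c, 0, x],
  ];
  const [r, g, b] = table[sector];
  return [r + m, g + m, b + m];
}
```

With three sliders driving `h`, `s`, & `v`, this covers the whole gamut the 6 + 3 swatches only sampled.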
Dialog
Oh yeah, did I mention the dialogs? The initial concept was to be able to create dialogs that floated around you, & that you could edit using `NativeUI.enterTextEditMode`. There were a few hurdles along the way. The first was that, whilst I could create text objects from script, I couldn't apply fonts programmatically on Instagram. Thus, I came up with a "textpool", created in the Spark UI, where there'd be text objects I'd borrow & return, editing their transforms & text content to match what I wanted. However, it was only when I had completed everything, including a full programmatic layout system, that the main blocker to the idea reared its ugly head. The blessed Instagram, again, did not support a capability. This time, entering text edit mode entirely. Did I mention again that I made a whole layout system to instantiate panels from textures, size the panels, & apply transformations & offsets? I know it sounds like I'm complaining, because I am; I spent a lot of time on it... Eventually, after considering implementing my own keyboard, I just settled for pre-bundled messages that would be randomly chosen.
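The textpool idea boils down to a tiny borrow/return scheme. Here's a hypothetical sketch of it; the `PooledText` shape and function names are mine, standing in for the real Spark text objects created in the UI:

```typescript
// Hypothetical "textpool": a fixed set of pre-created text objects
// (made in the Spark UI, since fonts can't be applied from script on
// Instagram) that scripts borrow, fill in, & later hand back.
interface PooledText {
  hidden: boolean; // stand-in for the scene object's visibility
  text: string;    // stand-in for the text content
}

function makeTextPool(size: number) {
  const free: PooledText[] = Array.from(
    { length: size },
    () => ({ hidden: true, text: "" })
  );
  return {
    // Borrow an object, unhide it, & set its content; null if exhausted.
    borrow(content: string): PooledText | null {
      const obj = free.pop() ?? null;
      if (obj) {
        obj.hidden = false;
        obj.text = content;
      }
      return obj;
    },
    // Return an object: hide it, clear it, & put it back in the pool.
    giveBack(obj: PooledText): void {
      obj.hidden = true;
      obj.text = "";
      free.push(obj);
    },
  };
}
```

In the real filter, borrowing would also involve reparenting the object & editing its transform to sit inside the dialog panel.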
final:
- Project
- Filter
reflection:
This was a ride. Initially, it was supposed to be a much smaller project; why was there a need to use scripting anyways? It was pretty painful at times too, as the documentation, & even the TypeScript types themselves, were incomplete (all callbacks were typed as `{}`, which effectively meant `any`). Some things that were documented just straight up didn't work either (configuring the slider to be of `'COLOR'` type would just throw, even if it was documented to work). However, with everything done, I'd say there were things that did eventually work out for the better: how I eventually picked up on how `Signal`s worked, how I could eventually write my own abstractions, how I knew to just keep the types open in another VSCode window to search through for something obscure.
Besides just learning how to make a filter in Spark, & how to make a filter in Spark using script, I also picked up some more niche skills: this was the first time I used sliced sprites, & I learnt how to make clean pixel art by mosaic-ifying icons. Overall I don't think this was a project wasted, as much as I want to feel like it due to the time spent for not much in return.
I think I'll use a lot of the skills learnt here. Who knows, creating filters does seem fun, & there are a lot of things I haven't touched, like shader programming. Okay, maybe not the shader programming can of worms, but I'll probably ~~revisit~~ explore the rest of the canned foods.