The brief for this was to present an ML model (or a potential one) and how it would work in practice. I chose an autonomous self-driving dataset and explored how it could work in an Apple environment.
Spatial One final presentation
The brief for this unit was to create a concept for a visionOS app. Our team created this:
Coding 1 final:
Project Brief: Create an Apple visionOS app with five TabView pages
Sample code that I wrote:
(SwiftUI ContentView)
import SwiftUI

struct ContentView: View {
    var body: some View {
        TabView {
            ActionView()
                .tabItem {
                    Label("Action", systemImage: "bolt.fill")
                }
            ComedyView()
                .tabItem {
                    Label("Comedy", systemImage: "face.smiling.fill")
                }
            DramaView()
                .tabItem {
                    Label("Drama", systemImage: "theatermasks.fill")
                }
            HorrorView()
                .tabItem {
                    Label("Horror", systemImage: "moon.stars.fill")
                }
            SciFiView()
                .tabItem {
                    Label("Sci-Fi", systemImage: "airplane")
                }
        }
    }
}

#Preview {
    ContentView()
}
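The five genre views referenced above (ActionView, ComedyView, and so on) live in their own files and aren't shown here. As a rough sketch of what one of them might look like, assuming each tab is simply a titled list of films (the film titles below are placeholders, not the actual project content):

import SwiftUI

// Minimal sketch of a single genre page, assuming each tab lists films.
struct ActionView: View {
    // Hypothetical sample data for illustration only.
    let films = ["Film One", "Film Two", "Film Three"]

    var body: some View {
        NavigationStack {
            List(films, id: \.self) { film in
                Text(film)
            }
            .navigationTitle("Action")
        }
    }
}

#Preview {
    ActionView()
}

The other four views would follow the same pattern, each with its own title and data.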
Altered States
3D + Visual Effects Collaboration
Miro Board:
https://miro.com/app/board/uXjVNm6sh8Y=/
This is a collaboration between Abby Turner, Ari Flor Barco, Ziheng Li, Amber-Rose Liu, and myself.
It is a visual effects short film made using a mixture of Nuke, Autodesk Maya, and Adobe Premiere Pro.
My part in the film:
I filmed Abby Turner in various positions at Tate Britain, referencing the storyboard she had created based on our combined ideas. We then edited that raw footage into one film.


I modeled the statue from a reference, with a character design based loosely on that reference.


Ari Flor Barco then re-meshed the statue to make it more coherent, and I retextured it in Substance Painter, adding a displacement map so it would more closely resemble the original reference.
I animated this scene of the footage.
For the painting scene, I lit and rendered the paintings whose frames Ziheng Li had sculpted and which Amber-Rose Liu had digitally painted. I then used tracking and rotoscoping to fully integrate them into the shot.
Final Composition: