Meshy’s latest PBR 3D model generation brings huge opportunities to dynamically generate virtual experiences tailored to users’ preferences.
This is an experience/game I made that uses Meshy for all of its visual 3D assets. I created the game, designed the assets using prompts, and designed the modular system that makes them interactive and performant.
It is meant to show how experiences could be tailored to the user in real time through input and feedback.
It is also meant as an example of how individual static assets can be composed into broader categories of games that an AI can design and iterate on by itself.
Connecting the pieces with dynamic content (cables, fog, particles, light trails, subtle animations) makes the static assets come together in a grounded world.
(If the images are stuttering, wait a few seconds for them all to load.)
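Here’s a simplified sketch of the cable idea (illustrative TypeScript with names of my choosing, not the game’s actual code): a hanging span between two anchor points, approximated with a parabolic sag, which is a cheap stand-in for a true catenary that reads well at game scale.

```ts
// Approximate a hanging cable between two anchor points with a parabolic sag.
type Vec3 = { x: number; y: number; z: number };

function cablePoints(a: Vec3, b: Vec3, sag: number, segments = 16): Vec3[] {
  const pts: Vec3[] = [];
  for (let i = 0; i <= segments; i++) {
    const t = i / segments; // 0..1 along the span
    pts.push({
      x: a.x + (b.x - a.x) * t,
      // Linear interpolation plus a dip that is 0 at both ends
      // and exactly `sag` at the midpoint (4t(1-t) peaks at 1).
      y: a.y + (b.y - a.y) * t - sag * 4 * t * (1 - t),
      z: a.z + (b.z - a.z) * t,
    });
  }
  return pts; // feed these into a line renderer or tube mesh
}
```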
I started with this passive world, built up in vertical layers:

But I wanted to push the question further: what if I could ask my game to become a different environment, like under the ocean or in space, and it could react? This is a demo of what that future could look like:


Environments are assembled from modular pieces that Meshy generated with some creative prompting. The pipeline also generates things like “ideal spots for cables” and lighting, which it bakes into lightmaps.

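To give a feel for the modular design, here’s an illustrative sketch of the kind of metadata a piece might carry and how cable anchors on neighboring pieces can become spans (the field names are mine, not the project’s actual schema):

```ts
type Vec3 = { x: number; y: number; z: number };

interface ModularPiece {
  id: string;
  meshUrl: string;      // the Meshy-generated asset
  cableAnchors: Vec3[]; // the "ideal spots for cables", in local space
}

// Pair up cable anchors on two placed pieces into spans that a renderer can
// turn into geometry (e.g. with a sag function like the one above). This
// naive version connects every pair; a real system would filter by distance.
function cableSpans(a: ModularPiece, b: ModularPiece, offsetB: Vec3): [Vec3, Vec3][] {
  const spans: [Vec3, Vec3][] = [];
  for (const pa of a.cableAnchors) {
    for (const pb of b.cableAnchors) {
      spans.push([pa, { x: pb.x + offsetB.x, y: pb.y + offsetB.y, z: pb.z + offsetB.z }]);
    }
  }
  return spans;
}
```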
Each piece was individually designed using prompts and images.

The vehicle AI drives a range of different-sized vehicles, which are likewise populated with Meshy assets based on the theme:

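The population logic is conceptually simple; here’s a sketch of picking an asset by theme and size class (the catalog shape and names are assumptions, not the actual game data):

```ts
type SizeClass = "small" | "medium" | "large";

interface VehicleAsset {
  meshUrl: string; // Meshy-generated vehicle model
  theme: string;   // e.g. "ocean" or "space"
  size: SizeClass;
}

function pickVehicle(
  catalog: VehicleAsset[],
  theme: string,
  size: SizeClass,
  rand: () => number = Math.random
): VehicleAsset | undefined {
  const candidates = catalog.filter(v => v.theme === theme && v.size === size);
  return candidates[Math.floor(rand() * candidates.length)];
}
```

Passing in a seeded `rand()` (see below) would keep the vehicle choices deterministic along with the rest of the world.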
On the “ground” level I was playing with Meshy’s animations, e.g. for walking characters.
Environments can be generated in real time from seeds:

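A deterministic PRNG is what makes “same seed, same world” possible: any machine that knows the seed rebuilds the identical sequence of choices. Here’s a sketch using the well-known mulberry32 generator:

```ts
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}

const rand = mulberry32(42);
const pieceIndex = Math.floor(rand() * 10); // same pick on every run and machine
```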
This is all a networked multiplayer experience as well. Here you can see a server and two clients presenting the same simulation, with the same theme, which anyone can change:

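Because generation is seed-driven, keeping clients in sync is cheap: the server only has to broadcast the theme and seed, and every client rebuilds the identical world locally. A sketch of what such a message could look like (the shape is illustrative, not the actual protocol):

```ts
interface ThemeMessage {
  type: "theme";
  theme: string; // e.g. "ocean" or "space"
  seed: number;  // feeds the deterministic generator on each client
}

// Client side: on receiving the message, regenerate the world locally.
function handleMessage(raw: string, regenerate: (theme: string, seed: number) => void): void {
  const msg = JSON.parse(raw) as ThemeMessage;
  if (msg.type === "theme") regenerate(msg.theme, msg.seed);
}
```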
Currently it’s more of a social lobby, with more co-operative features being prototyped; for example, this idea around deliveries and influencing traffic patterns:


This currently deploys performantly across many different device categories as well. It runs on PC and in WebGL (so it’s easy to send someone a link), as well as on phones. On mobile it’s a nice effect to use the gyro to look around:

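In a browser build, gyro look can be driven by the standard DeviceOrientationEvent. A simplified sketch (the yaw/pitch mapping is a rough approximation and depends on how the device is held; real code would also handle screen orientation):

```ts
async function enableGyroLook(setLook: (yaw: number, pitch: number) => void): Promise<void> {
  // iOS 13+ gates motion data behind an explicit permission prompt.
  const doe = DeviceOrientationEvent as any;
  if (typeof doe.requestPermission === "function") {
    if ((await doe.requestPermission()) !== "granted") return;
  }
  window.addEventListener("deviceorientation", (e) => {
    if (e.alpha === null || e.beta === null) return;
    const yaw = (e.alpha * Math.PI) / 180;         // rotation about z, degrees to radians
    const pitch = ((e.beta - 90) * Math.PI) / 180; // tilt, re-centered for holding upright
    setLook(yaw, pitch);
  });
}
```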
I’d like to explore further:
1) Hook up Meshy’s API directly into the app to let users input their own theme in real time on top of this modular design (see the sketch after this list).
2) Explore how LLM APIs can let an AI iterate on gameplay by itself (and also open the door for users to further design their own game within the game).
3) Explore how this can be adapted for mixed reality.
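For idea 1, the flow would be roughly: send the user’s theme prompt to Meshy’s API, poll until the task completes, then stream the resulting model into the modular system. A rough sketch; the endpoint path, request body, and response fields below are assumptions to verify against Meshy’s current API docs:

```ts
const MESHY_API = "https://api.meshy.ai/v2/text-to-3d"; // assumed endpoint

async function generatePiece(prompt: string, apiKey: string): Promise<string> {
  const headers = { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" };

  // Kick off a generation task for one modular piece.
  const create = await fetch(MESHY_API, {
    method: "POST",
    headers,
    body: JSON.stringify({ mode: "preview", prompt }), // assumed request shape
  });
  const { result: taskId } = await create.json(); // assumed response shape

  // Poll until the task finishes, then hand back a model URL to load.
  for (;;) {
    await new Promise((r) => setTimeout(r, 5000));
    const task = await (await fetch(`${MESHY_API}/${taskId}`, { headers })).json();
    if (task.status === "SUCCEEDED") return task.model_urls.glb; // assumed field
    if (task.status === "FAILED") throw new Error("Meshy generation failed");
  }
}
```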
Please contact me for a live demo or to discuss further.