Real-Time Visual Synth Driven by RP2350
RP2350 is the dual-core microcontroller at the heart of Raspberry Pi Pico 2 boards. Although the RP2350 has two identical Arm Cortex-M33 cores, it is usually programmed in an asymmetric multiprocessing (AMP) style, without a shared task scheduler (unlike, say, FreeRTOS SMP). It is still simple to pin different tasks onto each core and use the two fast hardware FIFOs for inter-core communication.
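As a minimal sketch of that pattern (assuming the Pico SDK's multicore API, with placeholder task bodies), core 0 launches core 1 and hands it work items over the hardware FIFO:

```cpp
// Minimal AMP sketch using the Pico SDK multicore API:
// core 0 produces work and signals core 1 over the hardware FIFO.
#include "pico/stdlib.h"
#include "pico/multicore.h"

// Runs on core 1: block until core 0 pushes a token, then act on it.
static void core1_entry(void) {
    while (true) {
        uint32_t token = multicore_fifo_pop_blocking();  // wait for a message
        // ... core-1 task goes here (e.g. push a finished frame to the display) ...
        (void)token;
    }
}

int main(void) {
    stdio_init_all();
    multicore_launch_core1(core1_entry);   // pin the second task onto core 1

    while (true) {
        // ... core-0 task goes here (e.g. render the next frame) ...
        multicore_fifo_push_blocking(1);    // tell core 1 a work item is ready
    }
}
```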
Recently, I acquired a 1.3” 240x240 LCD made especially for Raspberry Pi Picos. It attaches directly to the board pins: command and data signals are sent to its ST7789VW controller over SPI, and the joystick and four button inputs are read through nine GPIO pins. This gave me a chance to test how far I could push the microcontroller as a real-time visual synthesizer.
With help from generative AI, I created a “plasma effect”, generated by combining several sine waves over a 2D grid and animating them over time. The rendering happens in a single frame buffer. For each pixel, its coordinates and its distance from the center are fed into sine functions, and the results are added up to create smooth interference patterns. A time parameter continuously shifts the phases of these waves, making the pattern flow and evolve. Another phase offset cycles through colours for an extra ethereal feel. I programmed the joystick to shift the colour phase and change the animation speed, and the four buttons to switch between four plasma parameter presets.
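Here is a simplified sketch of that per-pixel math; the wave constants and the RGB565 palette are illustrative stand-ins rather than the actual preset values:

```cpp
// Simplified plasma sketch: sum several sine terms over pixel coordinates and
// radial distance, shift their phases with time, and map the result to RGB565.
#include <math.h>
#include <stdint.h>

#define WIDTH  240
#define HEIGHT 240

static uint16_t framebuffer[WIDTH * HEIGHT];   // RGB565 frame buffer

// Illustrative palette: map a value in [0, 1] plus a colour phase to RGB565.
static uint16_t palette(float v, float colour_phase) {
    float r = 0.5f + 0.5f * sinf(6.2832f * (v + colour_phase));
    float g = 0.5f + 0.5f * sinf(6.2832f * (v + colour_phase + 0.33f));
    float b = 0.5f + 0.5f * sinf(6.2832f * (v + colour_phase + 0.66f));
    return ((uint16_t)(r * 31) << 11) | ((uint16_t)(g * 63) << 5) | (uint16_t)(b * 31);
}

void render_plasma(float t, float colour_phase) {
    for (int y = 0; y < HEIGHT; y++) {
        for (int x = 0; x < WIDTH; x++) {
            float dx = x - WIDTH / 2.0f, dy = y - HEIGHT / 2.0f;
            float dist = sqrtf(dx * dx + dy * dy);
            // Sum of sine waves over coordinates and radial distance, phase-shifted by time.
            float v = sinf(x * 0.06f + t) + sinf(y * 0.08f - t) + sinf(dist * 0.05f + t);
            v = (v + 3.0f) / 6.0f;   // normalise the sum of three sines into [0, 1]
            framebuffer[y * WIDTH + x] = palette(v, colour_phase);
        }
    }
}
```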

You can find the source code on GitHub. The LCD vendor provides a C SDK shared between multiple products. I extracted the part specific to my display and changed it to render into a 240x240 frame buffer, which is then sent to the display every frame.
At first, I used a single core to render each frame in memory and then send the entire frame out via blocking SPI. The rendering relies on floating-point calculations, which are expensive even with the RP2350’s single-precision FPU. I didn’t attempt to optimize the rendering code itself (beyond building a sine look-up table upfront), as I was curious how much faster and smoother the animation could get if rendering and driving the display were divided between the two cores. The rendering stands in for any CPU-heavy graphical workload.
The single-core demo got me to a measly 3.3 FPS, which is too choppy to accept even for a non-gamer! I then split rendering and writing to the display across the two cores so they run in parallel. Once a frame is ready, the renderer signals the display core to shovel the data out. Writing to and reading from the same frame buffer can normally create “tearing” glitches, but with this relatively slow, flowy animation it wasn’t an issue. The dual-core demo reached 5.5 FPS, which feels more like an animation, even if it’s still choppy.
To play around with my Pico further, I could use Direct Memory Access (DMA) or the Pico’s unique Programmable I/O (PIO) to speed up the display driver. Currently, though, the renderer is much slower than the driver (which on its own is equivalent to about 8.5 FPS), so even CPU-free ways of moving pixels won’t be enough to speed up the animation. Perhaps splitting the rendering across the two cores to run in parallel could help, but then we’re gradually getting into why GPUs exist in the first place!
Sonar-Powered Creepy Halloween Ghost
I recently bought a Raspberry Pi Pico W, which comes with a Bluetooth module, along with an HC-SR04 ultrasonic rangefinder. I wanted to hide it inside Halloween decorations and program it to make a creepy ghost sound that intensified as my child’s Halloween party guests approached it. The children might have already aged out of being scared by it, but it was a technical success after all. I’m sharing the source code and some of the details here.
The ultrasonic rangefinder works by emitting high-frequency sound pulses and measuring how long it takes the echoes to return from nearby objects. The emission and measurement have to be driven manually by the microcontroller. The microcontroller’s job is to measure the distance regularly and, if someone is standing within range, produce a creepy ghost sound: a sine wave of a few hundred hertz with some extra low-frequency modulation to give it a “breathing” effect.
Circuitry
The rangefinder needs 5V and ground, which are easily taken from the Pico’s VBUS and GND pins. Its TRIG pin, which triggers the ultrasonic pulses, can be driven by a regular GPIO pin (GP2), since 3.3V is enough, but its ECHO pin outputs 5V. To keep the rangefinder from frying the board by sending 5V into the receiving GPIO pin (GP3), I use a simple voltage divider made of a 10k and a 15k ohm resistor: ECHO connects to GP3 through the 10k resistor, and GP3 is pulled down to GND through the 15k resistor. This divides the voltage to 5V × 15k / (10k + 15k) = 3V, which reliably registers as high on a GPIO pin.
There isn’t anything else to the circuit besides powering the board via USB.

Source code
Instead of using the standard C/C++ SDK, I opted for Arduino-Pico v5.3.0. You can still program the board in C++ (Arduino style), but this core comes with a convenient Bluetooth driver, and the IDE makes debugging over the serial port simpler. The entire source code lives in a single file.
The setup defines the I/O pins and connects to the Bluetooth speaker as an A2DPSource. For the connection, I specified my speaker’s MAC address directly, for reliability and simplicity. The main loop is in charge of driving the rangefinder and of calculating and streaming the ghost sound, so to keep it tight and performant, I pre-calculate the sine function at 256 points during setup and store them in a look-up table, with linear interpolation between entries.
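A rough sketch of that table and its interpolated lookup is shown below; the 32-bit phase format and the function names are my own assumptions, not necessarily what the original sketch uses:

```cpp
// 256-entry sine table built once in setup(), read back with linear interpolation.
#include <Arduino.h>
#include <math.h>

static float sineTable[257];   // one extra entry so interpolation never wraps mid-pair

void buildSineTable() {
  for (int i = 0; i <= 256; i++) {
    sineTable[i] = sinf(2.0f * PI * i / 256.0f);
  }
}

// phase is a 32-bit accumulator: the top 8 bits index the table,
// the remaining bits give the fractional position for linear interpolation.
float sineLookup(uint32_t phase) {
  uint32_t idx  = phase >> 24;                          // 0..255
  float    frac = (phase & 0x00FFFFFF) / 16777216.0f;   // fractional part in [0, 1)
  return sineTable[idx] + frac * (sineTable[idx + 1] - sineTable[idx]);
}
```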
The main loop polls the ultrasonic sensor by sending a 10-15 microsecond TRIG pulse and measuring how long the ECHO pin stays high. From this round-trip time, I compute the distance; the results are surprisingly accurate. If an object is within range (15-150 cm), the loop starts generating a sine wave whose frequency (300 to 900 Hz) and amplitude are functions of distance. This creates the illusion that the “ghost” is screaming harder as you get closer. Since a pure sine wave can sound a bit artificial, I add a low-frequency oscillator to the mix. I maintain two separate phase accumulators, one for the main tone and one for the modulation, and read both from the look-up table. I apply smoothing and clamping to avoid clicks or harsh transitions. This is the part where generative AI and a few iterations helped me get the best results.
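Below is a sketch of that polling step: trigger, echo timing, and a simple distance-to-pitch mapping. The pin numbers follow the circuitry section; the mapping itself is illustrative, since the real sketch’s curve and smoothing came out of a few iterations.

```cpp
// Poll the HC-SR04 and map distance to the ghost tone's frequency and amplitude.
// Assumes pinMode(TRIG_PIN, OUTPUT) and pinMode(ECHO_PIN, INPUT) were done in setup().
#include <Arduino.h>

const int TRIG_PIN = 2;   // GP2
const int ECHO_PIN = 3;   // GP3, behind the 10k/15k voltage divider

// Returns distance in centimetres, or -1 if no echo came back in time.
float measureDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(12);            // 10-15 us trigger pulse
  digitalWrite(TRIG_PIN, LOW);

  unsigned long echoUs = pulseIn(ECHO_PIN, HIGH, 30000UL);  // time the echo, 30 ms timeout
  if (echoUs == 0) return -1.0f;
  return echoUs * 0.0343f / 2.0f;   // speed of sound ~343 m/s, halved for the round trip
}

// Closer objects give a higher pitch and a louder scream.
void ghostParams(float cm, float &freqHz, float &amplitude) {
  if (cm < 15.0f || cm > 150.0f) { freqHz = 0.0f; amplitude = 0.0f; return; }
  float closeness = (150.0f - cm) / 135.0f;   // 0 at 150 cm, 1 at 15 cm
  freqHz    = 300.0f + 600.0f * closeness;    // 300 Hz far away, 900 Hz up close
  amplitude = closeness;
}
```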
The audio output is then calculated at a sample rate of 48 kHz. The left and right channels carry the same value (mono), and I use a 2048-frame buffer for streaming the audio over Bluetooth. The Raspberry Pi Pico is not exactly a high-performance microcontroller, but it can handle driving the sonar sensor and streaming Bluetooth audio in the same loop, with only occasional buffer underruns (which show up as clicks). To avoid those, the second core could be used to separate the sensing and streaming tasks and to ensure streaming takes priority over sensing (no major change in distance is expected between polls anyway).
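To illustrate, here is a sketch of filling one mono audio block with the two phase accumulators; fillAudioBlock and its wiring into the A2DPSource callback are my own placeholders, and sineLookup is the interpolated table lookup sketched earlier:

```cpp
// Fill an interleaved stereo block (same value on both channels) from two phase
// accumulators: one for the 300-900 Hz tone, one for the slow "breathing" LFO.
#include <Arduino.h>

float sineLookup(uint32_t phase);   // interpolated table lookup from the earlier snippet

const float SAMPLE_RATE = 48000.0f;
static uint32_t tonePhase = 0, lfoPhase = 0;

void fillAudioBlock(int16_t *samples, int frames, float freqHz, float amplitude) {
  // Convert frequencies to 32-bit phase increments per sample.
  uint32_t toneStep = (uint32_t)(freqHz * 4294967296.0f / SAMPLE_RATE);
  uint32_t lfoStep  = (uint32_t)(3.0f   * 4294967296.0f / SAMPLE_RATE);   // ~3 Hz breathing

  for (int i = 0; i < frames; i++) {
    float lfo  = 0.75f + 0.25f * sineLookup(lfoPhase);   // slow amplitude wobble
    float tone = sineLookup(tonePhase) * amplitude * lfo;
    int16_t s  = (int16_t)(tone * 20000.0f);             // leave headroom below full scale
    samples[2 * i] = samples[2 * i + 1] = s;             // same value on left and right
    tonePhase += toneStep;
    lfoPhase  += lfoStep;
  }
}
```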
Serverless Photo-Sharing App Using Amazon Web Services
As the self-appointed IT director of my new family, I was tasked with finding the best solution to easily upload, back up, and share pictures and videos of our newborn daughter. While there are a myriad of cloud services that a normal person would go for, I didn’t want to rely on them to safeguard our most precious moments. To be honest, I was also itching to create a serverless app on AWS, without committing to much cost or maintenance overhead.
AWS has a host of relatively cheap services that make creating small serverless apps easy. The JavaScript API allows you to keep most of the logic in the browser, where you can access AWS directly or through a proxy (e.g. API Gateway). Here is how I used them:
- Cognito for authentication: The main users of this app were my family and friends, so I only had to worry about unintended mistakes rather than malicious abuse. This allowed me to create pre-defined users for different roles (e.g. admin and visitor) and use Cognito to authenticate them. The JavaScript API lets you safely hard-code the public identifiers in the web page, so users only need a password to log in.
- IAM for authorization: Once users are authenticated, IAM should grant them only the minimal privileges they need. For example, I gave file upload access to admin users, but only read access to visitors. The Principle of Least Privilege keeps users from wreaking havoc. IAM doesn’t offer the finest-grained access levels, but for a trusted set of users that is good enough. More flexible authorization would, of course, have to be implemented on a proxy or web server.
- S3 for storage: Amazon’s simple storage is truly simple to use. I used it to store the media files, thumbnails, and static website assets. You can make the static site public but put the media files behind Cognito. The nice thing about S3 plus Cognito is that you can include the Cognito token in the S3 URL and use it in your website as you normally would with hosted images.
- DynamoDB for database: The list of galleries and files, timestamps, captions, user access, and comments have to be stored somewhere. While S3 provides a limited ability to store metadata per file, DynamoDB’s always-free tier has enough capacity to hold them in a few tables. The NoSQL (or rather schemaless) nature of the database makes it easy to add new features quickly.
- Lambda for processing: A serverless architecture wouldn’t be complete without a function-as-a-service component! In this case, I used an S3-triggered function to create an entry in the database and process newly uploaded images and videos. It can do anything from generating thumbnails (Lambda comes pre-packed with ImageMagick) to dealing with EXIF idiosyncrasies.
As for the front end, this app didn’t need much. Besides the good ol’ AWS JavaScript API, I used Bootstrap for easy styling, Knockout for two-way data binding, and Unite Gallery as the media viewer. Unite Gallery is an open-source library with a few available themes and an easy setup. However, getting videos to play on mobile and handling JPEG EXIF orientation proved challenging.
If I found time to improve the app, these areas would come up next:
- CloudFormation: As of now, most of the configuration has been done manually, and its persistence is at the mercy of AWS. I could use CloudFormation to codify how everything is set up, so it can be restored if anything goes wrong. Amazon provides CloudFormer to create a template from an existing architecture, but it didn’t cover the bulk of my configuration around Cognito and security policies.
- Automatic rotation: Not all browsers show cellphone images in the right orientation based on their metadata. I could use a Lambda function to automatically rotate images according to their EXIF orientation field.
- API Gateway: With the combination of API Gateway and Lambda, there would be no need to give users direct access to AWS resources. This would improve security and might make the app useful to other people with more serious use cases.
- Backup: A scheduled task that backs up media files and database records onto cheaper storage (e.g. AWS Glacier). For a more paranoid approach, I could also keep copies with another cloud provider such as Google Cloud.
I’d be happy to get feedback on the design, and ways to improve the architecture.
Chilkoot Trail: The World's Longest Museum
Chilkoot Trail, a 53-km trail from Dyea, Alaska to Bennett, BC, was the main route for gold rush prospectors in the late 1890s to reach the Klondike River in Yukon. This slideshow contains photos from my 5-day hike along this historic trail.
Slideshow plugin by Pixedelic.
If you liked this, you may also like my Iceland Trip post.
JavaScript Online: hosting a static site cheaply and effortlessly
A friend of mine was trying to get hired as a software developer, and he asked me about resources to hone his skills and practice for programming interviews. I came across a few websites where you can solve programming puzzles online: your code is sent to a server, run in a sandboxed container, and you shortly get the result. My friend was specifically learning JavaScript, which made me wonder how cheaply and effortlessly I could create and maintain a similar site for JavaScript alone.
The most expensive parts of hosting are usually tied to the computing power of servers. No server can beat the increasingly cheap option of not having a server at all. If your website can work with static resources, you can store everything you need on a storage service, configure your domain, spend a few cents a month, and never have to break a sweat about how your website is running. But how much can you accomplish with a static website?
In the case of an online JavaScript practice service, you can get away with not having a server for many things:
- Core: The core of your service, running users’ code, can be delegated to their browsers. Web workers are a great fit here: they cleanly isolate potentially harmful code and are supported in modern browsers.
- Assets: All assets on landing pages, blog posts, and other informational pages are static.
- Security: Users can see how you run their code, see your test data for the programming problems, and reverse-engineer them. In this case, let’s agree, that actually serves the purpose of teaching JavaScript.
- Personalization: Local Storage can be used to store each user’s problem-solving history. It doesn’t survive clearing the browser data or moving to another device, but oh well!
- Community and Engagement: I haven’t added any feature of such nature yet, but there are free or cheap services like Disqus or SumoMe that can add comments or email collection widgets to your static page. I’m not aware of any service for a leaderboard, but I’m sure if the site becomes popular enough, I can roll my own AWS Lambda script to take care of that.
To generate the site, I use a Node.js script. The Jade templating engine, Markdown, UglifyJS, and Express.js (as a local test server) have come in very handy. I’ve written an automated build script where I only need to add a single JSON file per problem; it creates the whole page, adds it to the index and sitemap, and deploys new or updated files to Google Cloud Storage. Google Cloud Storage makes it easy to host a static website, but it doesn’t support HTTPS yet, so I’m using Cloudflare’s free plan to add HTTPS, server-side analytics, a caching layer, denial-of-service protection, and even a widget that shows a warning if a browser is not supported.
I might open-source this in the future, but for now, feel free to practice JavaScript online for fun or technical interviews!