Building a Database Using Railway Logs as Storage

Before Railway's 2025 hackathon started, a Railway employee, Brody, gave me the idea of building a database that uses Railway's logs as its storage medium. I thought it was such a unique concept that it immediately became my hackathon project.

The Challenge: Making Logs Work as Storage

Using logs as a storage medium comes with some interesting challenges. First, logs are typically one-directional: there's no easy way to retrieve them unless you captured them ahead of time. Luckily, Railway's GraphQL (GQL) API came to the rescue. I could retroactively fetch logs, and better still, the logs GQL API supports filtering, letting me search logs by specific parameters. That gave me everything I needed to build a functional database.
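To make the read path concrete, here's a minimal sketch of how such a fetch could look using only Node's built-ins. The query and field names (`environmentLogs`, `environmentId`, `filter`) and the endpoint in the comment are assumptions based on this post's description, not a verified copy of Railway's schema.

```javascript
// Build a GraphQL request payload for fetching logs back by filter.
// Field names here are illustrative; Railway's real schema may differ.
function buildLogsQuery(environmentId, filter) {
  return {
    query: `query ($environmentId: String!, $filter: String!) {
      environmentLogs(environmentId: $environmentId, filter: $filter) {
        message
        timestamp
      }
    }`,
    variables: { environmentId, filter },
  };
}

// Usage sketch (hypothetical endpoint and token):
// fetch("https://backboard.railway.app/graphql/v2", {
//   method: "POST",
//   headers: {
//     "Content-Type": "application/json",
//     Authorization: `Bearer ${token}`,
//   },
//   body: JSON.stringify(buildLogsQuery("env-id", "@id:abc123")),
// });
```

The filter string is what turns a log stream into a queryable store: instead of scanning everything, each read asks the API only for the log lines tagged with the record's ID.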

The second hurdle was storage limitations. Through testing, I discovered that a single Railway log can hold about 80 KB of data before JSON parsing breaks. To work around this, I'd need a chunking mechanism: breaking data into smaller pieces on write and reconstructing them on read.
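The chunking idea can be sketched in a few lines. The ~80 KB ceiling comes from my testing; the `CHUNK_SIZE` below is my own conservative choice to leave headroom for the JSON envelope around each chunk, not a number from the actual project.

```javascript
// Bytes of payload per log line; kept well under the ~80 KB limit to leave
// room for the surrounding JSON metadata (id, seq, total).
const CHUNK_SIZE = 64_000;

// Split a string value into ordered chunks, each small enough for one log.
function toChunks(id, data) {
  const total = Math.ceil(data.length / CHUNK_SIZE) || 1;
  const chunks = [];
  for (let i = 0; i < total; i++) {
    chunks.push({
      id,
      seq: i,
      total,
      payload: data.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE),
    });
  }
  return chunks;
}

// Reassemble fetched chunks back into the original value.
// Logs may come back out of order, so sort by sequence number first.
function fromChunks(chunks) {
  return chunks
    .slice()
    .sort((a, b) => a.seq - b.seq)
    .map((c) => c.payload)
    .join("");
}
```

Each chunk carries `seq` and `total` so a reader can tell when it has fetched every piece, even if the log API returns them out of order.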

Going Dependency-Free (Because Why Not?)

I've always wanted to build a project using zero dependencies, and for some reason I figured now would be the perfect time to try that challenge. So for this hackathon, I didn't use a single external dependency; everything was built from scratch on top of Node.js.

The Build Process

True to form, I started on Friday the 8th of August 2025 - two days after the hackathon began, because waiting 'til the last second is in my DNA.

On my first day, I built out the project structure and tackled the two most critical components: Railway's API integration and the server. I used the deploymentLogs GQL query to fetch logs with filtering. Friday went well, and I ended the day being able to store and fetch data based on an ID.
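The store-and-fetch-by-ID flow can be sketched like this. The record shape and the idea of tagging each line with its `id` are my illustration of the general approach, not the project's actual wire format.

```javascript
// Write path: emit one JSON log line tagged with the record's id.
// Railway captures stdout as deployment logs, so console.log *is* the write.
function writeRecord(id, value) {
  const line = JSON.stringify({ id, value, ts: Date.now() });
  console.log(line);
  return line;
}

// Read path: given log messages fetched back from the API, parse each one
// and return the newest value stored under the requested id.
function readRecord(logMessages, id) {
  const matches = logMessages
    .map((m) => {
      try {
        return JSON.parse(m);
      } catch {
        return null; // skip non-database log lines (app output, etc.)
      }
    })
    .filter((r) => r && r.id === id);
  return matches.length ? matches[matches.length - 1].value : undefined;
}
```

Because logs are append-only, an "update" is just a newer line with the same `id`; the reader resolves it by taking the last match.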

Saturday and Sunday were spent wrestling with log filtering. For some odd reason, the filters weren't working properly through the API, even though the exact same filter worked perfectly in Railway's UI search. This really had me scratching my head, and I spent way too long on the problem.

On Sunday, I finally decided to switch to Railway's environmentLogs GQL query, which seemingly fixed the issue. In hindsight, I should've been using environmentLogs from the get-go because it spans across deployments, making my database persistent. Not sure why I didn't start with that.

That wrapped up Sunday for me, but knowing I had until Monday night to finish, I "speedran" my 8 hours of sleep within 4 hours and got back to work.

Adding File Uploads and Security

Monday brought the challenge of file uploads. The problem? File uploads require disk storage, but I could only store strings in logs. My solution was to convert files to base64 data, apply gzip compression, and store the compressed result. On read operations, I'd fetch the compressed data, decompress it, and convert it back to the original file before sending it to the client.

I also spent time implementing encryption - something I'd wanted from the start. After all, storing your data as logs isn't exactly the most secure storage method ever, and encryption goes a long way toward keeping data safe.

What I'd Do Differently

There are tons of features I wanted to implement, like a proper database client, but I ran out of time. If I could do anything differently in this hackathon, I'd probably have started sooner. Regardless, it was super fun building this!

Check It Out

This project has a Railway template you can deploy here!

Feel free to check it out on GitHub here!