
Syndream — Built with our self-developed engine for 10,000+ players

A topic by MB Charlotte created 61 days ago Views: 206 Replies: 5


We completed the engine's core features in 2024, and its biggest highlight is probably the ability to support tens of thousands, even millions, of players interacting in real time.

By March 2025, we had already finished our global test. But the content was pretty limited — players could run around, say hi to each other, and chat. That’s about it.

This time, we want to expand the gameplay.

So! I’ve decided to divide players into two factions. They’ll be able to fight each other, and the more enemies they defeat, the more military points they earn. Players can also see which areas on the map are controlled by their faction.

The goal here is to prove that large-scale MMOs with tens of thousands of players aren't some impossible dream. Scaling player numbers shouldn't be a problem — in fact, the more players there are, the better your faction can grow, right?

If every channel or world keeps capping the number of online players, what happens when player counts drop? There's no one left to play with. That’s the real issue.

Syndream is here to break that limitation.

A true MMO should be a scalable world where players can join freely, without artificial limits.



We’re planning to start the public test tonight, and I hope the budget can hold up for a full 48 hours.

I know the hack-and-slash animations on the frontend aren’t super smooth right now — but with a one-week development timeline, this is as far as we could push it for now. If we get more resources later, we’ll definitely revisit and polish the frontend experience.

Whether you’re on the Red Team or the Blue Team, sticking together is the best way to rack up military points!

Currently, characters regenerate health slowly over time — so if you’re about to get sliced down, remember to run for it!

If you have any feedback, feel free to share it in the survey!

https://docs.google.com/forms/d/e/1FAIpQLSd3P4rd7MI3C8MqjFaUMkDiiJ6O78U5LP4F-PRZ...

Play:

https://demo.mb-funs.com/

It’s possible that the device was running on lower specs, which led to memory buildup and eventually caused crashes. We’re planning to switch to a different version to address this issue.

Since testing opportunities like this are rare, we decided to also try out some existing plugins... and the results were quite surprising. I might write a separate article to share our insights later.

July 2025 Global Test: Adjustment Summary

After deploying the system to AWS, we noticed an abnormal increase in memory usage within the first few minutes—an issue that didn’t occur during previous tests.

Our initial suspicion pointed to the underlying network layer’s memory handling for packets. Originally, the system was designed for internal use, requiring application-level developers to manually split large packets into fixed sizes before transmission. However, as we transitioned the system for external developer use, we re-evaluated this design and found it unfriendly. We revised the behavior to support packets of arbitrary sizes: when a packet exceeds the preset size, the system dynamically allocates extra memory and releases it immediately after sending.
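To make the change concrete, here is a minimal C++ sketch of the revised send path, assuming a fixed preset buffer; kPresetSize, fixed_buffer, and raw_send are illustrative names, not the engine's actual API:

    // Sketch only: packets that fit the preset size reuse a pre-allocated
    // buffer; oversized packets get a temporary allocation that is released
    // immediately after sending, as described above.
    #include <cstddef>
    #include <cstring>
    #include <vector>

    constexpr std::size_t kPresetSize = 4096;          // preset packet size (assumed)
    static char fixed_buffer[kPresetSize];             // pre-allocated send buffer

    void raw_send(const char* data, std::size_t len);  // underlying network send (assumed)

    void send_packet(const char* payload, std::size_t len) {
        if (len <= kPresetSize) {
            // Common case: no extra allocation.
            std::memcpy(fixed_buffer, payload, len);
            raw_send(fixed_buffer, len);
        } else {
            // Oversized packet: allocate dynamically, send, then release.
            std::vector<char> extra(payload, payload + len);
            raw_send(extra.data(), extra.size());
        }  // `extra` is destroyed here, returning the memory right away
    }

One side effect of this pattern is that frequent allocate/free cycles for large packets can fragment the heap, which is why jemalloc seemed like a natural thing to try next.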

We then tried introducing jemalloc to mitigate fragmentation. However, memory usage increased at nearly double the original rate. After two consistent test results, we concluded this approach was ineffective.

During further debugging, we considered another possibility: insufficient compute resources leading to event backlogs and memory buildup.

Assuming one character requires one unit of compute power, the system needed approximately 300,000 units. At the time, only 10 logic servers were running, meaning each server had to handle the logic for ~30,000 characters—likely exceeding the capacity of a single core.

We added 4 more logic servers and observed the system. It has now been running stably for over 30 minutes with memory usage below 2%, suggesting that the root cause was a compute bottleneck.
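In rough numbers, that takes the per-server load from 300,000 / 10 = 30,000 character units down to about 300,000 / 14 ≈ 21,400.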

Initially, we were misled by top, which showed overall CPU usage below 40%, giving the false impression that resources were sufficient. What we missed was that each host only ran 1 logic unit and 2 network units. When the logic unit was overloaded and network units idle, overall CPU usage did not reflect the true bottleneck.

This experience highlighted the need for per-unit CPU usage metrics going forward, so we can pinpoint bottlenecks accurately and trigger the right alerts. Under high load, pairing each logic unit with a dedicated network unit may also provide better hardware utilization.
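As one possible way to collect those per-unit metrics on Linux (a sketch only, not our actual monitoring code), each unit's CPU time can be sampled from /proc/<pid>/stat and compared against a single core's budget:

    // Sketch: sample user + kernel CPU jiffies for one process from
    // /proc/<pid>/stat, then convert the delta over an interval into a
    // percentage of a single core. Illustrative only.
    #include <fstream>
    #include <sstream>
    #include <string>
    #include <unistd.h>

    long cpu_jiffies(pid_t pid) {
        std::ifstream f("/proc/" + std::to_string(pid) + "/stat");
        std::string line;
        std::getline(f, line);
        // Field 2 (the command name) is wrapped in parentheses and may
        // contain spaces, so parse from the last ')' onward.
        std::istringstream rest(line.substr(line.rfind(')') + 2));
        std::string state;
        long skip = 0, utime = 0, stime = 0;
        rest >> state;                                           // field 3
        for (int field = 4; field <= 13; ++field) rest >> skip;  // fields 4-13
        rest >> utime >> stime;                                  // fields 14-15
        return utime + stime;
    }

    // Percent of one core consumed by `pid` over `interval_sec` seconds.
    double cpu_percent(pid_t pid, unsigned interval_sec) {
        long before = cpu_jiffies(pid);
        sleep(interval_sec);
        long after = cpu_jiffies(pid);
        long hz = sysconf(_SC_CLK_TCK);                          // jiffies per second
        return 100.0 * (after - before) / (hz * interval_sec);
    }

A reading approaching 100% for a logic unit would have flagged the bottleneck even while host-level CPU stayed under 40%.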

In the end, we ran the system using 14 c7i.xlarge instances.