
Deceiver

A topic by Helvetica Scenario created Dec 01, 2015

Overworld

Here's how it works right now. First you spend a "hack kit" to gain access to a zone. It will start out being owned by someone else.

Next you deploy a drone to capture it. It will attempt to "auto-capture" the zone for up to 30 seconds. If the "zone's owner" interferes (i.e. the matchmaking server finds an online opponent to face you), then you have 10 seconds to accept their challenge and play a match against them. If you ignore the challenge, you'll lose the drone you spent.

Likewise, people can capture your zones at any time, and you'll have the option to interfere. The more zones you own, the more energy you will passively collect, even while you're not playing.

The overworld UI no longer fits nicely in a 300px gif, so here's a full screenshot:

The above is what the UI looks like when you're a member of a group, which allows you to play 2v2 matches.

New maps

2v2 matches require new maps in addition to a number of changes and new features. Ian knocked out a few, including this one:

And I finally finished this one from forever ago:

I'm showing the game at the Independents' Day festival this weekend, so I should have more playtesting data to work with soon.


This game is really stunning! Interested to see how it develops. How long has the current version been in development since you switched to your own engine? :O

Thanks! :) It's been about 18 months with the new engine, give or take.

Quick update: I am working on two massive projects that aren't ideal for screenshots, hence the radio silence.

One project is netcode. So far I've got a headless server building on Linux, a virtual connection going over UDP, and some nice serialization / bitpacking tools. My code borrows heavily from libyojimbo, which is part of the gafferongames networking article series.
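
For the curious, the core of that serialization work is a bit packer: instead of writing a full 32 bits per value, you write only as many bits as the value's range requires. Here's a minimal sketch of the idea (illustrative only, not libyojimbo's actual code):

```cpp
#include <cstdint>
#include <vector>

// Minimal bit packer sketch: values are written with only as many bits as
// their range actually needs, instead of a full 32 bits each.
struct BitWriter
{
    std::vector<uint32_t> words; // packed output
    uint64_t scratch = 0;        // bits waiting to be flushed
    int scratch_bits = 0;

    void write(uint32_t value, int bits) // bits in [1, 32]
    {
        uint64_t mask = (bits == 32) ? 0xffffffffull : ((1ull << bits) - 1);
        scratch |= (uint64_t(value) & mask) << scratch_bits;
        scratch_bits += bits;
        while (scratch_bits >= 32)
        {
            words.push_back(uint32_t(scratch));
            scratch >>= 32;
            scratch_bits -= 32;
        }
    }

    void flush()
    {
        if (scratch_bits > 0)
        {
            words.push_back(uint32_t(scratch));
            scratch = 0;
            scratch_bits = 0;
        }
    }
};

// Example: a health value in [0, 100] needs 7 bits, a team index in [0, 3] needs 2.
//   BitWriter w;
//   w.write(health, 7);
//   w.write(team, 2);
//   w.flush();
```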

The other project is porting the game to PS4. I finally got the dev kit and SDK set up; it was surprisingly straightforward. So far everything is building except for the one source file which contained all the OpenGL code. I'm slowly translating those GL calls to their PS4 equivalents. The PS4 API is pretty sophisticated and a bit daunting, but the documentation is fine, and overall it's nice to work with so far.

I'll probably continue to write some gameplay code to keep from going insane with low-level graphics and network stuff. I'd like to add more abilities, starting with grenades.

Netcode

Work on the netcode continues. It provides reliable ordered messaging under up to 25% packet loss, and it can now serialize the world state over the network. Bitpacking and zlib compression keep the bandwidth down. The next project is delta compression and client-side interpolation. After everything is synced properly and "spectator mode" is effectively complete, then comes the tricky part: client-side prediction.

PS4

The PS4 port requires me to translate all the shaders from GLSL to Sony's proprietary shader language. I'm putting this on hold for now to focus on other features. If you're a PS4 programmer or you know someone who is, contact me! I would love to hire someone to work on this aspect. It's a very straightforward port.

Ability overhaul

A few things bothered me about the ability system. One is that because you could only spawn objects at your current position, it did not afford much room for creative expression. It was less a question of when and where to use abilities, and more just a question of when.

Furthermore, at least one of the abilities (sniper) worked differently enough that it broke the pattern of the other abilities. You entered "sniper mode", in which your next move fired a bullet instead of moving your drone.

I decided to switch all the abilities to this system. So now, you press a button to switch to a certain ability, then hit the primary fire button to use it. If it's a spawn ability, the object spawns where you were aiming.

Decoy

This is a new ability that spawns a decoy drone, which confuses the enemy's UI and aggros all their AI units. Needs tweaking, but I think it'll stay in. Here's a VOD showing the development process for the decoy.

Health v10

I've always loved games with lots of health. I used to run custom Halo matches with snipers, infinite grenades, and health cranked up to 400%. No one liked it.

This game has been a further exploration into this tense, high stakes, no-respawn type of game, in the vein of Counter-Strike and Rainbow Six: Siege (which I have been enjoying immensely lately). The problem is, those games are team-based whereas mine is largely 1v1.

I had been trying to artificially inflate the playtime with more health, but when it comes down to it, if your goal is to destroy the enemy, it's deeply unsatisfying to abandon a fight unless you're about to die. That leads to matches with one quick fight, as opposed to the multiple, varied skirmishes I'm going for.

So I'm going the more traditional route. One hit point, one shield hit point, and a small number of respawns. Yes, it's a compromise, but ultimately the goal I'm shooting for is an emotion, and this new system gets closer to that goal than the old one.

In order to make the respawns still feel meaningful, I plan to make them affect the metagame. Each drone you waste costs money, and maybe if you win a match, you get to take your opponent's leftover drones.

Rush mode

Another factor that encouraged overly quick matches was the gametype: so far it's been only deathmatch. I also realized that deathmatch modes never reach the same level of "tactical thinking" achieved in modes like CTF. So now I'm experimenting with a "rush" mode, which should be familiar if you've ever played Battlefield:

This is your basic attack/defend game type. The defender has to keep the objectives safe for a certain period of time. We'll see how it goes.


Wow, I just discovered this devlog, and I'm totally stoked about your rendering style!! Also pretty impressed you decided to write your own C++ code in lieu of Unity.

Oh, and I was just playing Haunted Heist... very charming. Super satisfying audio!

-Zach

Thanks for the kind words. :) Yeah, don't do it my way... stick with Unity... it's better for your sanity. :)


Another design overhaul

It bugs me to have two modes of gameplay (overworld and PvP) so explicitly delineated and separate. It makes the game lean too far toward the multiplayer category and makes the singleplayer story aspects feel tacked-on. I want the two to be truly integrated in a meaningful way.

So instead of abruptly kicking you out to the overworld, I'm just going to pull up the UI right there in-game while you wander around. This means that I'm cautiously inching back toward the old exploration mode. Which is great because I've been dying to somehow get my colors back in.

It will be more integrated than it used to be. Previously the game would completely reload the level, setting the meshes to be colored or B&W depending on the mode. Now it's just a render flag which I can toggle at any time.

So the idea is, you'll be in exploration mode, you'll enter a new area, find a control panel, and start capturing the area. You might capture the zone before the owner can defend it, or they might spawn in right away and stop you.

There's still a lot to think about, especially what happens if you lose. I think I want some form of permadeath for the player character.

Netcode

Progress continues. First I got positions and orientations synced in a naive, brute-force manner. Then I implemented delta compression so that only moving objects are sent every frame. My first try was less than perfect:

But I got it working. At this point the client was basically in spectator mode; there was no player input.

After synchronizing a few player configuration variables like usernames etc., I started sending player input from the client to the server. It's important that I don't have the server just trust whatever position the client says the player is at; a system like that would be hacked almost instantly.

I was surprised to see how accurate dead-reckoning on the server was, even with low float precision. I'm currently using dead-reckoning, plus the server accepts the client's exact position as canonical if it's within a small tolerance of where the server thinks it should be.
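
To make that concrete, here's a rough sketch of the tolerance check; the 0.5m threshold and the names are made up for illustration:

```cpp
#include <cmath>

// Hypothetical sketch of the "trust but verify" position check: the server
// runs its own dead-reckoned simulation, then adopts the client's reported
// position only if it is within a small tolerance of the server's estimate.
struct Vec3 { float x, y, z; };

static float distance(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

const float POSITION_TOLERANCE = 0.5f; // meters; a tuning value, not from the devlog

Vec3 reconcile_position(const Vec3& server_estimate, const Vec3& client_reported)
{
    // Close enough: accept the client's exact position as canonical, so the
    // client never sees a correction snap.
    if (distance(server_estimate, client_reported) < POSITION_TOLERANCE)
        return client_reported;

    // Too far apart: keep the server's authoritative estimate; the client
    // gets corrected on the next state update.
    return server_estimate;
}
```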

Here's how it works in practice. The white triangle shows where the server thinks the player is.

There are still tons of issues to tackle, but it's looking feasible. I also tested the game with my Linux VM in NYC. It survived its first contact with real-world internet conditions, so that's encouraging.

Here's a dev stream recording of some of these netcode features being coded.

New culling effect

If you look at the old culling effect, you can see the inside of the geometry you're currently crawling on. I turned off back-face culling to make this work, and it looked fine. However, you could often rotate the camera and see all the backfaces of the outside of the level, which looked tacky. Plus, disabling back-face culling entails a significant performance hit.

So I re-enabled back-face culling, and now I use a cylinder to black out everything you shouldn't be able to see. Works surprisingly well.


GDEX

Got to run a booth at GDEX this past Saturday. Also gave a talk called Thirteen Years of Bad Game Code (full article coming soon). Made some new friends, caught up with old ones, had a great time. No pictures, sorry. I don't believe in pictures.

A ton of changes happened around GDEX. Some happened between the two days of GDEX, because I woke up in the middle of the night to write code for four hours.

Dash

Sometimes it's possible to aim beneath the surface you are currently attached to, like this:

The laws of geometry dictate that you can't shoot to the location you're aiming at without passing through the surface you're attached to.

Previously in this scenario, if you pulled the trigger, the drone would always dash, which kept you stuck to your current surface but might slide you closer to your goal.

This was confusing to people, so now it's only possible to dash if the place you're aiming is roughly co-planar with you. Otherwise, as you can see above, it just won't let you go.
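
Here's roughly what that check might look like; the thresholds are placeholder values, not the ones the game actually uses:

```cpp
#include <cmath>

// Rough sketch of a "roughly co-planar" test for the dash: the aim point must
// lie close to the plane of the surface the drone is attached to, and that
// surface must face the same general direction as the target surface.
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

bool can_dash(const Vec3& drone_pos, const Vec3& surface_normal,
              const Vec3& target_pos, const Vec3& target_normal)
{
    // Signed distance from the target point to the plane we're attached to.
    Vec3 to_target = { target_pos.x - drone_pos.x,
                       target_pos.y - drone_pos.y,
                       target_pos.z - drone_pos.z };
    float plane_distance = dot(to_target, surface_normal);

    const float MAX_PLANE_DISTANCE = 0.25f; // how far off our plane we tolerate
    const float MIN_NORMAL_DOT = 0.9f;      // surfaces must face roughly the same way

    return std::fabs(plane_distance) < MAX_PLANE_DISTANCE
        && dot(surface_normal, target_normal) > MIN_NORMAL_DOT;
}
```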

Reticle

On a similar note, the reticle was also behaving weirdly. Consider the case of the player on the right in the screenshot below:

They're aiming at an enemy drone, but there's nothing behind them but empty space. Previously, the reticle would have been red, preventing them from going. I have to be very careful to prevent drones from flying off into space, which would definitely happen in this case if they hit the drone and the drone died.

However, since we're attached to the same surface as the enemy drone, we can dash and hit the drone without ever detaching from the surface.

It gets even more complicated though. Since the game is third-person, we actually have to do two raycasts. First we raycast from the camera straight through the reticle. Then we raycast from the drone to the position we got from the first raycast. Most of the time these line up in a way that makes sense to the player, but sometimes it can get confusing.
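
In rough pseudo-engine terms (the raycast call and types here are placeholders), the two-step resolution looks something like this:

```cpp
// Sketch of the two-raycast reticle resolution described above.
struct Vec3 { float x, y, z; };

struct RaycastHit
{
    bool hit;
    Vec3 position;
};

// Hypothetical engine call: casts a ray and returns the first thing it hits.
RaycastHit raycast(const Vec3& from, const Vec3& to);

Vec3 resolve_aim_point(const Vec3& camera_pos, const Vec3& camera_far_point,
                       const Vec3& drone_pos)
{
    // 1. From the camera, straight through the reticle, out into the world.
    RaycastHit camera_hit = raycast(camera_pos, camera_far_point);
    Vec3 target = camera_hit.hit ? camera_hit.position : camera_far_point;

    // 2. From the drone toward that point, since the drone is not at the camera.
    RaycastHit drone_hit = raycast(drone_pos, target);
    return drone_hit.hit ? drone_hit.position : target;
}
```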

Long story short, the reticle code has like 17 special cases now. It's insane but it feels great. Now, if you aim at an enemy and pull the trigger, something is guaranteed to happen. This was not always true.

Shields

I've experimented with how and when to display shields from almost the beginning. For a while they were just a ghostly white outline, then they were a solid transparent color. More recently I added a fresnel effect. After adding animations I think they're finally good enough.

Culling effect

I spent a few hours trying to solve the problem of seeing through map geometry. It's a huge challenge because there's no way for the GPU to know whether a given pixel is "inside" or "outside" level geometry.

I tried doing a bunch of raycasts and generating clipping planes from those, but I encountered artifacts that would require sampling every single triangle in a certain radius around the camera and creating a clipping plane for each one. Which is feasible for my low-poly maps, just not something I'm ready to tackle right now.

Batteries

I didn't know what to call these for a long time, but I think they're batteries. Anyway, they used to confer health, but I took that away in favor of the current, simplified health system. They've felt less compelling ever since.

Now they also function as sensors, meaning they provide stealth and detect enemies.

It's more fun to battle over one of these now, because one of you might become invisible at any time.

Netcode

Last Friday at 4am the netcode entered a somewhat functional state. I was streaming at the time, so I sent the build to someone watching the stream and we were able to "play" together (sort of)!

I've since fixed most of the glitches in that video. There were also some issues with the reliable messaging, which I never would have discovered except that my laptop has been having major Wi-Fi issues; it started dropping packets left and right. I increased some buffers, timeouts, and the range of the sequence numbering, and it seems pretty solid now.

This specific game is challenging from a netcode perspective, because the main mechanic involves player characters bouncing off each other. The problem is, no two players experience exactly the same game state because of lag.

So I'm running the movement code on both the client and server. If things line up well enough, the server will accept the player's exact position as canonical. However, if things aren't exactly the same on the client and server (due to lag for example), things can get out of sync, and then the client has to awkwardly snap to where the server says they should be. I'd like to avoid that as much as possible.

There are a number of things I'm doing to address this. First I implemented lag compensation as described by Valve. As a drone flies through the air, the server rewinds the state of the world to 45 ms ago, or whatever the round-trip time is for that particular client. In other words, it rewinds to the state of the world as it appears to that client. Then it checks for collisions against this older state, rather than the current one. Time travel, basically.

Lag compensation isn't perfect though, and this particular mechanic (bouncing off players) is incredibly sensitive to minor deviations. If you raycast against a sphere at a slightly different position, the resulting reflection angle may vary wildly. To solve this, I'm quantizing the normal from this raycast, meaning I have a table of 18 vectors distributed around a sphere, and I pick the closest one to the normal where we actually hit the sphere.
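
A minimal sketch of that quantization step; the table here is only partially filled in for illustration:

```cpp
#include <cstddef>

// Sketch of normal quantization: snap the raycast normal to the closest entry
// in a fixed table so client and server always agree on the bounce direction.
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static const Vec3 NORMAL_TABLE[] =
{
    { 1, 0, 0 }, { -1, 0, 0 },
    { 0, 1, 0 }, { 0, -1, 0 },
    { 0, 0, 1 }, { 0, 0, -1 },
    // ...plus normalized diagonal entries to fill out the rest of the sphere
};

Vec3 quantize_normal(const Vec3& n)
{
    size_t best = 0;
    float best_dot = -2.0f;
    for (size_t i = 0; i < sizeof(NORMAL_TABLE) / sizeof(NORMAL_TABLE[0]); i++)
    {
        float d = dot(n, NORMAL_TABLE[i]); // both unit length, so dot == cos(angle)
        if (d > best_dot)
        {
            best_dot = d;
            best = i;
        }
    }
    return NORMAL_TABLE[best];
}
```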

But wait! It gets even more complicated. After choosing a reflection vector, it might turn out that we can't actually bounce that direction. For example, it might send the drone flying into space. I solved this problem a long time ago by doing a randomized series of raycasts starting with the original reflection vector and slowly expanding outward, stopping when a viable candidate is found.

"Randomized" does not sync well over a network, so that had to go. I'm now using a pre-determined table of possible reflection angles.

This still isn't working perfectly. The collision positions sync up to within 0.03m and the quantized normals sync up great, but somehow the code still chooses a different reflection vector between the client and server about 25% of the time, causing disorienting hitches. Still actively researching this.

Grenades

These came out just a smidge too bouncy at first:

They still need a lot of work, but I like where they're going. They function both as grenades and mines, so if no enemies are near, they just attach to a surface and wait.

Language

You may have noticed something weird about the screenshots in this post...

So the story is, yesterday I experienced either a stroke of inspiration, or just a regular stroke. Not sure which. At any rate, I spent the next four hours "translating" every string in the game to an imaginary language.

I blame Anthony Burgess. I recently read A Clockwork Orange and became enamored with the idea of fictional languages that are just close enough to English to be deciphered by an average English reader. I'm not sure if it's right for me though.

In his foreword, Burgess said Nadsat was born of his cowardice, created to obfuscate the pornographic nature of his novel. Hiding your choice of words behind a language barrier does smack of squeamishness, but I like the idea for two reasons. First, it's difficult to write a believable world that differs vastly from our own, yet features most of the same words and phrases.

Second, I find the added cognitive load of deciphering alien words draws me into fictional worlds. At first I laughed at certain Nadsat words ("eggiwegg"?), but that happens with any foreign language. My brain adjusted, and by the end of the novel I felt I could pass as a native speaker. That's a powerful tool for engendering empathy.

Another argument against: I don't have the time or expertise to develop a pseudo-English language. So far, in this "trial run", it's basically a bastardized, poorly-understood version of Middle English.

I may consult with some friends who know more about Middle English. Or I might drop the idea completely.


New edge rendering system

I was talking to a friend at our local gamedev meetup who's also doing a game with vector graphics. Rather than a traditional edge-detection shader, he renders the full scene normally, then renders the whole scene again, but this time with the fill mode set to render lines, and using a modified version of each model which only contains the edges.

I realized this was way simpler than the edge-detection I was doing, and it would allow me to do real MSAA on the edges. So now my importer executable generates a separate index buffer for each mesh, which only contains sharp edges. It uses the same vertex buffer. This system has two positive side-effects: first, because I disable culling when rendering edges, even if a wall gets culled out for visibility purposes, you can still see the outline of it. Second, I can now control which edges get highlighted directly in Blender.
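
Here's a rough sketch of how an importer can pick out those sharp edges by comparing adjacent face normals; the data layout and threshold are simplified for illustration, and it only covers the automatic part (the hand-marked Blender edges would be merged in separately):

```cpp
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Sketch: an edge shared by two faces is "sharp" (and gets its own entry in
// the edge index buffer) if the angle between the face normals is large.
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns an index buffer of line segments (pairs of vertex indices).
std::vector<uint32_t> build_edge_indices(const std::vector<uint32_t>& tri_indices,
                                         const std::vector<Vec3>& face_normals,
                                         float cos_threshold = 0.7f) // ~45 degrees
{
    // Map each undirected edge to the faces that use it.
    std::map<std::pair<uint32_t, uint32_t>, std::vector<uint32_t>> edge_faces;
    for (size_t f = 0; f < tri_indices.size() / 3; f++)
    {
        for (int i = 0; i < 3; i++)
        {
            uint32_t a = tri_indices[f * 3 + i];
            uint32_t b = tri_indices[f * 3 + (i + 1) % 3];
            if (a > b) std::swap(a, b);
            edge_faces[{ a, b }].push_back(uint32_t(f));
        }
    }

    std::vector<uint32_t> edges;
    for (const auto& entry : edge_faces)
    {
        const auto& faces = entry.second;
        // Boundary edges, or edges whose adjacent faces diverge sharply, get drawn.
        bool sharp = faces.size() != 2
            || dot(face_normals[faces[0]], face_normals[faces[1]]) < cos_threshold;
        if (sharp)
        {
            edges.push_back(entry.first.first);
            edges.push_back(entry.first.second);
        }
    }
    return edges;
}
```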

Here's a glitch screenshot I took while developing this feature, which of course looked much better than the intended outcome:

And here it is working correctly:

The MSAA is still not perfect. I think the reason is that the depth buffer is not multisampled (and that's because it's a deferred renderer). I have to copy the depth buffer into the multisample FBO, taking the furthest depth sample from the closest ~4 pixels, to ensure that all 2-4 pixels of the line will clear the depth test.

Here's a direct comparison of the edge rendering techniques I've tried so far:

Netcode

If you've been following along, you know the game has these batteries dangling from physics-enabled chains:

Previously, I had to sync the position and orientation of every chain link across the network. When they were in motion, they constituted the majority of bandwidth. I tried serializing the state of the physics constraints at load time, then only syncing the position of the battery. The result is not perfect, but acceptable for a purely aesthetic feature. Bandwidth is now down to 100-200kbps in the worst case scenario.

Last time I talked about the difficult problem of drones attacking and bouncing off each other. I finally realized it's impractical to perfectly synchronize client and server via deterministic simulation. The server and client need to wait for some form of communication to occur between them before proceeding in a given direction.

Here's what I came up with (a rough sketch follows the list):

  • Client runs all movement code locally.
  • Client detects that it hit an enemy and executes the code to bounce off them. Client notifies the server which direction it bounced.
  • Server is also running movement code locally. There are two possibilities: either it detects the hit before receiving the client message, or after.
  • Most likely, the server will receive the client message first. It caches the message and waits for up to 0.1 seconds for the server-side movement code to confirm the hit. If it never hits anything, the server gives up and sends the drone bouncing off in the direction the client said it went.
  • The process is similar if the server detects a hit before receiving a client message. The server caches the resulting bounce direction and waits for 0.1 seconds for the client message. If it receives the message, it executes the bounce according to the client's wishes. Otherwise the server forgets anything ever happened.
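
A rough sketch of the server-side half of that handshake; the structure and function names are illustrative, not the game's actual code:

```cpp
// Cache whichever side reports a bounce first, then wait up to 0.1 seconds
// for the other side to agree before committing to a direction.
struct Vec3 { float x, y, z; };

void execute_bounce(const Vec3& direction); // hypothetical: actually applies the bounce

struct PendingBounce
{
    bool active = false;
    bool from_client = false; // who reported the hit first
    Vec3 direction = { 0, 0, 0 };
    float timer = 0.0f;
};

const float BOUNCE_CONFIRM_WINDOW = 0.1f; // seconds

void on_client_bounce_message(PendingBounce& pending, const Vec3& client_direction)
{
    if (pending.active && !pending.from_client)
    {
        // The server already detected the hit; the client message confirms it.
        execute_bounce(client_direction); // honor the client's reported direction
        pending.active = false;
    }
    else
    {
        pending.active = true;
        pending.from_client = true;
        pending.direction = client_direction;
        pending.timer = 0.0f;
    }
}

void on_server_detected_hit(PendingBounce& pending, const Vec3& server_direction)
{
    if (pending.active && pending.from_client)
    {
        execute_bounce(pending.direction); // the client already told us where it went
        pending.active = false;
    }
    else
    {
        pending.active = true;
        pending.from_client = false;
        pending.direction = server_direction;
        pending.timer = 0.0f;
    }
}

void update_pending_bounce(PendingBounce& pending, float dt)
{
    if (!pending.active)
        return;
    pending.timer += dt;
    if (pending.timer > BOUNCE_CONFIRM_WINDOW)
    {
        if (pending.from_client)
            execute_bounce(pending.direction); // server never saw the hit; trust the client
        // If the server saw the hit but the client never confirmed, just forget it.
        pending.active = false;
    }
}
```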

Physics chains and drone bouncing were the tough netcode challenges. The rest has been pretty straightforward:

  • The game calculates and displays the point you need to shoot at to hit a target. Previously this only worked in local games because on clients, the physics engine is overridden by the netcode, so it doesn't have any velocity values. I'm now calculating those missing velocities on the client by comparing consecutive state frames received from the server.
  • All abilities now work properly in networked games. The main challenge here was the ability to spawn minions. I have to sync the current animation, and the current timestamp inside that animation, for each minion.
  • Sparks, effects, explosions, controller rumble, camera shake, and other hit events are now synced across the network.
  • Shockwaves are now client-side only. Previously they were regular game entities, which meant their creation and deletion had to be synced across the network to ensure IDs lined up properly. Now they're just client-side effects, as they should be.

Bolter

This ability allows you to shoot bolts similar to the ones shot by minions. It's interesting because all the abilities are tied to the three-jump movement cooldown system. Spawn three minions and you'll have to wait for the cooldown before jumping again. The bolter is the only ability with no cooldown whatsoever. It's meant to be a rapid-fire weapon limited only by your energy resources.

Improved decoys

Decoy v1 was just a bad design. You could spawn a decoy in an obscure corner of the map, then run around the whole map without being spotted by enemies, minions, sensors, or anything else. If a decoy was active, you were basically invisible.

Now the decoy must be visible before it will confuse an enemy unit. If the decoy is hidden in a corner, it has no effect.

There is one exception: if you plant a decoy in view of an enemy sensor, that sensor will continually alert the enemy, "HEY! THEY'RE OVER HERE!" even if the decoy is across the map, and you're sitting right next to them.

Teleporter gone again

The teleporter has been added in and removed twice now. It's just not fun. Dear Future Evan: if you are tempted to bring back the teleporter a third time, DON'T DO IT.

Netcode

Until now, the netcode operated under the assumption that nothing happened until all clients were connected: everyone received the map data at once, and the reliable messaging always started on sequence 0.

I need clients to be able to join games already in progress. The new process looks like this:

  • Client spams the server with connect packets.
  • Once the server receives this packet, it saves the current sequence ID as the starting sequence for that client. The client needs to receive every message starting with that ID.
  • The client receives these messages, but does not process them yet. It stores them in a buffer.
  • Meanwhile, the server is also sending map data via an entirely separate reliable messaging channel. Normally, all reliable messages are sent to every client, but the map data is only sent to this specific client.
  • Once all the map data has been transferred, the client processes the messages it queued up while loading the map, and notifies the server that it's done loading.
  • The client is now caught up to the latest sequence ID, and things proceed normally.
There are a ton of tiny but critical implementation details that come together to make this work. My first prototype worked okay in perfect network conditions, but quickly fell apart under packet loss. After a few hours of bug fixing, the connection process now works even under 25% packet loss.
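
As an illustration, the client-side buffering can be as simple as this (types and names are placeholders):

```cpp
#include <cstdint>
#include <vector>

// Sequenced messages that arrive while the map is still loading are held,
// then replayed in order once loading finishes.
struct Message
{
    uint32_t sequence;
    std::vector<uint8_t> payload;
};

struct JoiningClient
{
    bool map_loaded = false;
    std::vector<Message> held; // messages received before the map finished loading

    void on_message(const Message& msg)
    {
        if (!map_loaded)
        {
            held.push_back(msg); // don't process it yet, just remember it
            return;
        }
        process(msg);
    }

    void on_map_load_complete()
    {
        map_loaded = true;
        for (const Message& msg : held) // replay everything we queued up, in order
            process(msg);
        held.clear();
        // ...then notify the server that we're done loading.
    }

    void process(const Message& msg); // hypothetical: hands the message to the game
};
```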

Minion attack animation

Previously, when Minions attacked, they would just stare at you until a bolt projectile suddenly materialized out of their forehead. But no more! Now it materializes out of their hand. This changes everything.

New character model

Parkour mode is back, and with it, a new player model.

(actually it's just a modified version of the player model from my last game)

The neat thing is that it lines up perfectly with the Minion model, so I can play every animation on both models.

Slide attack

I'm experimenting with some interactions between the player and minions in parkour mode. So now you can slide into a minion to take him out:

Of course you have to gain a certain amount of momentum to make the attack effective. I'm still tweaking the movement code to make this feel good. One problem with my last game was that you could reach top speed just by holding W, and there was no canonical way to accelerate past that (although speedrunners found a number of exploits). I'm trying to fix that in this game.

Minion melee attack

Minions became awkward when the player got too close; they were still firing projectiles as if the player was far away. Yesterday I added a melee attack:

That's it for now.

Happy Thanksgiving! :)

Terminal

Each level starts with the player navigating to the top of the map and hacking into a terminal, which switches the game into spider drone mode. The goal is then to capture the map, exit the terminal, and move on. At any point, the current "owner" of the map may spawn in to defend their property. In reality, the game will try to matchmake you with someone who owns that map; it might be different every time.

Here's a very WIPpy concept of how the terminals will work.

In-game map view

Previously, the overworld was a separate map that had to be loaded, with everything that entails. I needed it to be accessible from anywhere in the game, so I refactored it to stay in memory in the background.

Just wanted to pop my head in here to say that I finally finished the world's most complicated porta-potty:


Hey, I've been checking back every now and then to see any new updates, and the game is coming along really well!

This week I worked on the primary method of transportation between levels: trams!

I've always had a thing for trams in video games. They evoke a feeling of progress and meaning.

I didn't know if they would even be possible at first; Bullet physics does not allow dynamic rigid bodies with triangle mesh shapes. The shape has to be convex, which would prevent the player from entering the tram.

Here's what I did in the end:

  • Created a box-shaped dynamic rigid body for the tram
  • Disabled collision between this body and the player
  • Created a static rigid body with a triangle mesh for the actual tram collision shape
  • Parented the static body to the dynamic one so that my engine automatically updates its position to match the dynamic body

Amazingly, it all worked.
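
For reference, here's roughly what that setup looks like in Bullet terms. The masses, extents, and collision groups are placeholder values, and the "parenting" in the last step is an engine-level feature, not something Bullet provides:

```cpp
#include <btBulletDynamicsCommon.h>

// Hypothetical collision groups for filtering.
const short GROUP_PLAYER = 1 << 0;    // the player body is added elsewhere with this group
const short GROUP_TRAM_BODY = 1 << 1; // dynamic box, ignores the player
const short GROUP_TRAM_MESH = 1 << 2; // static trimesh the player actually stands on

void create_tram(btDiscreteDynamicsWorld* world, btTriangleMesh* tram_mesh)
{
    // 1. Dynamic box-shaped body that the physics engine actually simulates.
    btBoxShape* box = new btBoxShape(btVector3(2.0f, 1.5f, 4.0f));
    btScalar mass = 500.0f;
    btVector3 inertia(0, 0, 0);
    box->calculateLocalInertia(mass, inertia);
    btDefaultMotionState* motion = new btDefaultMotionState(btTransform::getIdentity());
    btRigidBody* tram_body = new btRigidBody(
        btRigidBody::btRigidBodyConstructionInfo(mass, motion, box, inertia));

    // 2. Don't collide with the player: the player group is excluded from the box's mask.
    world->addRigidBody(tram_body, GROUP_TRAM_BODY, short(~GROUP_PLAYER));

    // 3. Static triangle-mesh body for the actual tram collision shape.
    btBvhTriangleMeshShape* mesh_shape = new btBvhTriangleMeshShape(tram_mesh, true);
    btRigidBody* tram_collision = new btRigidBody(
        btRigidBody::btRigidBodyConstructionInfo(0.0f, nullptr, mesh_shape));
    world->addRigidBody(tram_collision, GROUP_TRAM_MESH, short(-1));

    // 4. "Parenting" (engine-side): every frame, copy the dynamic body's transform
    // onto the static trimesh body so the two stay in sync, e.g.
    //   tram_collision->setWorldTransform(tram_body->getWorldTransform());
}
```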

This was the first version, trams 1.0:

Then I made the tram runners smarter, so they could accelerate, decelerate, and follow paths:

Finally I tweaked the model and added glass and animated doors.

That's it for this week. Will probably go back and work on the spider drone half of the game next week.

This week I've been catching up with tweaks and bug fixes after the last big feature push. Most of the core game loop is functional now, even in networked mode, although some sharp edges still need to be sanded down.

New stuff: there are now two levels with trams, and they connect to each other. It works surprisingly well. Here's the tram on the new level:

There are also collectibles now. These provide much-needed resources and give you an incentive to explore.

That's it for this week.

Cracking

It wouldn't be a cyberpunk game without hacking of some sort. The idea here is to slow you down when entering a game to allow more time for matchmaking.

I used Beautiful Soup to scrape 64 4x4 Sudoku puzzles from a website; the game then randomly rotates the digits and flips the board to generate more puzzles. It's a fun little mini-game.
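
The board transformations are simple enough to sketch; the representation below (0 for empty cells) is an assumption, not the game's actual data format:

```cpp
#include <algorithm>
#include <array>
#include <random>

// Generate a "new" 4x4 Sudoku from a scraped one by relabeling the digits and
// mirroring the grid; both transformations preserve validity.
using Board = std::array<std::array<int, 4>, 4>; // 0 = empty, 1..4 = digits

Board permute_and_flip(Board b, std::mt19937& rng)
{
    // Randomly relabel the digits 1..4.
    std::array<int, 4> relabel = { 1, 2, 3, 4 };
    std::shuffle(relabel.begin(), relabel.end(), rng);
    for (auto& row : b)
        for (int& cell : row)
            if (cell != 0)
                cell = relabel[cell - 1];

    // Randomly mirror the board horizontally and/or vertically.
    if (rng() & 1)
        for (auto& row : b)
            std::reverse(row.begin(), row.end());
    if (rng() & 1)
        std::reverse(b.begin(), b.end());

    return b;
}

// Usage:
//   std::mt19937 rng(std::random_device{}());
//   Board variant = permute_and_flip(scraped_board, rng);
```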

Dock

I've been fleshing out the first few levels. The first one also doubles as the title screen:

Tarzan

Rope climbing and swinging is in, although still a bit WIPpy.

New map

This is the third map you'll discover, if you count the title screen. Which I do.

Parkour animations

These work exactly the way they did in my last game. While climbing a ledge, the player physics body moves straight up, and then straight forward in a jerky fashion. While this is happening, I offset the model and camera so that the climbing animation stays rooted at the same position even though the player entity is moving. After the animation is done, I blend everything back together so the model and physics body are in the same position again. I believe this is similar to something in UE4 called "root motion".

Animated characters

I started out thinking this game would work the same as my last in terms of story. Branching dialogue choices in a simple text-based system, plus random notes scattered throughout the levels.

This week I finally realized a few things:

  • I mainly play action games. This is an action game. Things happen in action games. Reading text is not a great fit.
  • None of my favorite games have branching dialogue. You choose a story branch by performing an action, not selecting it from a menu.
  • Games are most compelling when gameplay and story coexist and complement each other. That's difficult to do when they're totally separate. Pre-rendered cutscenes, or worse, animatics, foster a clear delineation between gameplay and story. The best games do everything in-engine, preferably without taking control away from the player. In an action game, heavy amounts of text lead to the same problem.
  • I now have enough modeling and animation experience to pull off fully animated characters if I take a lot of artistic license and stick to a stylized look.
In light of all that, I'm fully removing the text message system. You will now search out and talk to different characters throughout the game. The sailor above took me about two days to model, animate, and script. It's a slow process, but the end result is so much better than seeing a new message notification in the corner.

Friends! I need your feedback.

I am considering renaming the game once again. "The Yearning" is still a good fit, but it's confusing, pretentious, and easy to forget. It's also impossible to infer anything about the game from the name alone.

I'm considering renaming it to "Skirr".

"Skirr" is the name of the city in which the game takes place. Reasons I like this name:

  • It means "to flee". The game is about fleeing the earth to escape an apocalypse.
  • It sounds like "scurry", which evokes the creepy-crawly nature of the spider-bots.
  • It doesn't have much competition on Google.
  • To me it sounds like an action/adventure title.
  • Taking inspiration from Astroneer, "SKIRR" in all caps looks acceptable and could help draw attention.

Reasons I don't like it:

  • Confusing spelling. People who hear the title spoken might think it's spelled something like "Scur".
  • Confusing pronunciation. People who read the title might think it's pronounced something like "Skeer".

What say you? Too confusing still?

Another title I thought about for a while is "Caligula", after a play by Albert Camus. "Caligula" is the name of the refuge planet in the game. Unfortunately there's already a "Caligula" game in development.

Apologies for so many rebrands, but I would rather nail the title than stay shackled to a bad one in order to minimize confusion. Besides, No One Knows About Your Game.

In other news, the dock is finally finished:


Personally I think The Yearning was good enough, although SKIRR sounds more original and customised.

As long as the gameplay, visuals and sound are good, I don't think the name matters too much to be honest.


Life stuff

I had my first anxiety attack on Tuesday! Feels like I've completed a gamedev rite of passage. I've been relaxing and hanging out with my family this week to try and get healthy again. Feeling much better now. Here's what got done before the break:

Hobo

This guy was supposed to look ragged, but his outfit was based on the ridiculously photogenic homeless man, so it actually ended up quite stylish.

He's one of the first NPCs you'll meet. He just talks to himself.

Aerial kills

You can now kill minions from above. I haven't done anything to align the animation yet.

Behind-the-scenes work

Lots of bug fixes and small changes. I refactored the scripting system so that scripts can be executed on both the client and server in networked games. But the biggest time sinks (and of course the biggest overall challenges for this project) are the AI and netcode. I'll still be doing fun story stuff and character models through the end of February for a vertical slice to show at GDC. After that, it's time to dive in to network infrastructure and a completely new AI system.

Animations

I keep adding animations one by one. At one point, Assimp decided to optimize the root bone of the player model out of existence. This obviously caused some problems.


Projectile client-side prediction

All moving projectiles in the game are normal entities tracked via the usual interpolated transform sync system. This is fine for AI characters shooting at you, but it's incredibly frustrating when you are shooting projectiles. You have to wait for a network round-trip before the projectile shows up.

I often test netcode on localhost, where there is no lag. Since this feature is heavily dependent on lag, I took some time during the stream this past Friday to implement a buffer that simulates network lag.
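
The lag buffer itself is tiny. Here's a sketch with placeholder names; outgoing packets just sit in a queue until their delivery time arrives:

```cpp
#include <cstdint>
#include <deque>
#include <vector>

// Lag-simulation buffer: packets are queued with a delivery time instead of
// being sent immediately.
struct DelayedPacket
{
    double deliver_at; // seconds, in the same clock as the game's timer
    std::vector<uint8_t> data;
};

struct LagSimulator
{
    double artificial_delay = 0.15; // 150ms each way = ~300ms round trip
    std::deque<DelayedPacket> queue;

    void send(const std::vector<uint8_t>& data, double now)
    {
        queue.push_back({ now + artificial_delay, data });
    }

    // Call every frame; actually transmits packets whose time has come.
    void update(double now)
    {
        while (!queue.empty() && queue.front().deliver_at <= now)
        {
            transmit(queue.front().data); // hypothetical real socket send
            queue.pop_front();
        }
    }

    void transmit(const std::vector<uint8_t>& data);
};
```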

I cranked the lag up to 300ms total round-trip time and fired some projectiles. The first problem was that, since the server took 150ms to register my "fire projectile" command, my target might have moved by the time the projectile got to it.

The solution works like this on the server (sketched in code after the list):

  • Rewind the world 150ms to the point where the player fired
  • Step forward in increments of 1/60th of a second until we reach the present, checking for obstacles along the way
  • Spawn the projectile at the final position
  • If a target was hit during this process, delete the projectile and apply any damage effects
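
A sketch of that rewind-and-step loop, with placeholder engine hooks standing in for whatever the real lag-compensation code looks like:

```cpp
struct Vec3 { float x, y, z; };

struct Projectile
{
    Vec3 position;
    Vec3 velocity;
};

const float TICK = 1.0f / 60.0f;

// Hypothetical engine hooks.
void rewind_world(float seconds_ago);           // lag-compensation rewind
void step_world_forward(float dt);              // advance the rewound world state
bool sweep_projectile(Projectile& p, float dt); // move p by velocity*dt; true if it hit something
void apply_damage_at(const Vec3& position);
void spawn_projectile_entity(const Projectile& p);

void server_fire_projectile(Projectile p, float round_trip_time)
{
    float lag = round_trip_time * 0.5f; // one-way latency, e.g. ~150ms

    // Rewind to the moment the player actually pulled the trigger.
    rewind_world(lag);

    // Step forward to the present, flying the projectile through those old frames.
    for (float t = 0.0f; t < lag; t += TICK)
    {
        if (sweep_projectile(p, TICK))
        {
            apply_damage_at(p.position); // hit during catch-up: never spawn the entity
            return;
        }
        step_world_forward(TICK);
    }

    // No hit: spawn the real entity where the projectile would be right now.
    spawn_projectile_entity(p);
}
```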

To the player, there is still a 300ms delay before anything happens, but the projectile will pop into existence 20 feet out, where it would have been if there were no lag. This makes it easier to aim, but it's still annoying to have no immediate visual feedback when you fire.

I thought about spawning the projectile on the client. The problem is, projectiles are entities, and the entity system is controlled by the server. If I spawned projectiles on the client, IDs would get out of sync and things would explode.

So instead, I made a new system for fake projectiles, totally separate from the entities. Actually, "system" is too strong a word; it's just an array of structs. These fake projectiles live for up to half a second, and the client removes them in order as soon as the server spawns a real projectile.
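
Something along these lines, with illustrative field names and sizes:

```cpp
struct Vec3 { float x, y, z; };

const int MAX_FAKE_PROJECTILES = 16;
const float FAKE_PROJECTILE_LIFETIME = 0.5f; // seconds

struct FakeProjectile
{
    bool active;
    float age;
    Vec3 position;
    Vec3 velocity;
};

FakeProjectile fake_projectiles[MAX_FAKE_PROJECTILES]; // zero-initialized, all inactive

void spawn_fake_projectile(const Vec3& pos, const Vec3& vel)
{
    for (FakeProjectile& p : fake_projectiles)
    {
        if (!p.active)
        {
            p = { true, 0.0f, pos, vel };
            return;
        }
    }
}

void update_fake_projectiles(float dt)
{
    for (FakeProjectile& p : fake_projectiles)
    {
        if (!p.active)
            continue;
        p.age += dt;
        p.position.x += p.velocity.x * dt;
        p.position.y += p.velocity.y * dt;
        p.position.z += p.velocity.z * dt;
        if (p.age > FAKE_PROJECTILE_LIFETIME)
            p.active = false; // gave up waiting for the real projectile
    }
}

// Called when the server spawns the corresponding real projectile:
// remove the oldest fake one so the two never overlap.
void on_real_projectile_spawned()
{
    int oldest = -1;
    for (int i = 0; i < MAX_FAKE_PROJECTILES; i++)
        if (fake_projectiles[i].active
            && (oldest < 0 || fake_projectiles[i].age > fake_projectiles[oldest].age))
            oldest = i;
    if (oldest >= 0)
        fake_projectiles[oldest].active = false;
}
```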

Here's the end result running with over 300ms of lag:

Shops

You can now buy stuff at these special locations known as "shops".

Locke has a number of greetings he can give, which will have accompanying animations. I'm really starting to enjoy animation work! Actually had a blast making this:

Also, this thing is now over 50,000 lines of code.
