It has been quite a while since my last journal. Needless to say, the game has improved quite a bit since then. The AI has gone through several iterations and is now getting to where we need it to be. The behavior tree model that RAIN uses is meeting our needs nicely for the time being, and the AI now behaves more or less like you would expect. We've been working quite a lot on movement and traversal options, including swimming, sidling, mantling, and using ladders. This is all in the service of allowing more interesting puzzles, and it provides a more fluid-feeling movement system in general.

For the most part, our biggest challenge has been fixing the never-shrinking list of bugs. The bright side is that now that feature work has slowed down a bit and we're fixing more bugs, we're able to start creating more specific test cases and trying out entire sections of our first level. Basically, we're getting a little more into content than we were before, which is nice. Not everything is a grey box anymore, and it's starting to feel more and more like a game! For me personally, the next thing on my plate (besides the ever-present bugs to fix) is to set up a wider variety of AI to see what feels good and what doesn't.
The workaround is now fully in place, and we've managed to make things a good deal simpler. Our game now has the following structure. Upon game start, a "Setup scene" is loaded. The purpose of this scene is to hold the singletons and the travelers: anything that there is only one of (e.g. the dialog database) or that is expected to move from scene to scene (e.g. the player GameObject) is placed in this scene. Also in the setup scene is a script which waits until the setup scene is loaded and then additively loads the specified scene.
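To make the flow concrete, here is a minimal Python sketch of the setup-scene pattern described above. All class and scene names are hypothetical illustrations, not our actual code; in Unity the additive step would go through `SceneManager.LoadSceneAsync(name, LoadSceneMode.Additive)`.

```python
class SetupScene:
    """Holds the one-of-a-kind objects and the travelers."""
    def __init__(self):
        self.singletons = {"dialog_database": object()}  # only one ever exists
        self.travelers = {"player": object()}            # persists across scenes


class SceneLoader:
    """Waits for the setup scene, then additively loads the target scene."""
    def __init__(self, setup, target_scene):
        self.setup = setup
        self.target_scene = target_scene
        self.loaded = []

    def run(self):
        # The setup scene always loads first...
        self.loaded.append("Setup")
        # ...then the requested scene is loaded on top of it, so the
        # singletons and travelers are guaranteed to already be in place.
        self.loaded.append(self.target_scene)
        return self.loaded


loader = SceneLoader(SetupScene(), "Level1")
print(loader.run())  # ['Setup', 'Level1']
```

The key property is the ordering guarantee: any scene-specific script can assume the singletons exist, because the setup scene is always resident before the level scene arrives.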
The holiday season was a great one, but now it's time to finally catch up with all that we've been doing. What we've been doing the most of (from my perspective, anyway) is cutting the game down to a more manageable scope. This is undoubtedly a good thing, as it means that what we have in the final product will be more focused and polished. So! Let's take a look at what's changed.
First, the dialog system has been modified to deal with more than two branches at a time. Originally, we wanted the player to be able to "interrupt" the conversation, almost like someone with a megaphone telling the conversation participants what to do. However, we've since moved to a more subtle dialog method. Our new method allows for more branching paths and integrates items into the dialog system. With the new system, the player uses words that they've learned to respond to NPCs. Sometimes these words represent items; for example, the player could give a yam to an NPC that asked for one. This gives us a lot more flexibility with our conversations.
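The word/item idea can be sketched roughly like this. This is a hedged illustration only, and every name in it (`Word`, `DialogNode`, the yam NPC) is hypothetical, not the studio's actual API:

```python
class Word:
    """A word the player has learned; some words double as items."""
    def __init__(self, text, item=None):
        self.text = text
        self.item = item  # e.g. the word "yam" carries the yam item


class DialogNode:
    """One NPC prompt with a branch per recognized word."""
    def __init__(self, prompt, branches):
        self.prompt = prompt
        # word text -> (NPC response, does this branch hand over an item?)
        self.branches = branches

    def respond(self, word, inventory):
        if word.text not in self.branches:
            return "The NPC doesn't understand."
        response, needs_item = self.branches[word.text]
        if needs_item and (word.item is None or word.item not in inventory):
            return "You don't have that to give."
        if needs_item:
            inventory.remove(word.item)  # the item changes hands
        return response


yam = Word("yam", item="yam")
node = DialogNode("I'm so hungry...", {"yam": ("Oh, thank you!", True),
                                       "hello": ("...", False)})
inventory = ["yam"]
print(node.respond(yam, inventory))  # Oh, thank you!
print(inventory)                     # []
```

Because each branch is keyed by a learned word, adding a new conversational path is just adding an entry, which is where the extra flexibility comes from.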
The next thing that is substantially different is the AI. As stated before, the AI system was quite clunky to work with. However, the behavior tree system that we picked up also had a problem, in that it seemed to be incompatible with certain situations that cropped up quite often in our game. Namely, if an NPC wasn't in the scene when it was loaded, but was instantiated afterwards, the behavior tree would freak out. This left us with a conundrum: we could either give up this really nice system, or find a workaround that played well with the package. Well, I've been experimenting with a workaround that seems to deliver more than it takes away. It requires changing the way we've been setting up our levels, but in making the change, we can cut out large parts of the level manager, AI manager, and navigation code. It's kind of painful to cut out code that we spent lots of time on, but the reduced complexity is really quite worth it. Going forward, things should run much more smoothly with the workaround in place.
A lot has happened since my last journal update, so this is going to be a little more summary than explanation. The decision was made that the AI we had was too clunky for our task, so we changed to using behavior trees. Behavior trees work excellently for both the size and scope of our project. Next up, we started redesigning the dialog system. Originally, the dialog was going to require only a yes or no from the player. However, we decided that an item-based dialog would be best, so I have begun overhauling the dialog to allow for item-speech interactions, as well as more branching. Two of my main goals for the overhauled system are to improve the way the dialog database selects dialog, and to improve how dialog is created. We'll see how it goes, but the first steps of the upgrade have gone well.
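For readers unfamiliar with the model, here is a minimal behavior-tree sketch. This is not the API of the package we use (RAIN), just the core idea: composite nodes like selectors and sequences tick their children and propagate a success/failure status upward.

```python
SUCCESS, FAILURE = "success", "failure"


class Task:
    """Leaf node: runs a function against a shared blackboard."""
    def __init__(self, fn):
        self.fn = fn

    def tick(self, blackboard):
        return self.fn(blackboard)


class Sequence:
    """Succeeds only if every child succeeds, in order."""
    def __init__(self, *children):
        self.children = children

    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == FAILURE:
                return FAILURE
        return SUCCESS


class Selector:
    """Tries children in order until one succeeds."""
    def __init__(self, *children):
        self.children = children

    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == SUCCESS:
                return SUCCESS
        return FAILURE


def set_action(name):
    """Helper: a task that records the chosen action and succeeds."""
    def fn(blackboard):
        blackboard["action"] = name
        return SUCCESS
    return fn


# Hypothetical NPC: attack if a target is visible, otherwise wander.
tree = Selector(
    Sequence(Task(lambda bb: SUCCESS if bb.get("target_visible") else FAILURE),
             Task(set_action("attack"))),
    Task(set_action("wander")),
)

bb = {"target_visible": False}
tree.tick(bb)
print(bb["action"])  # wander
```

What makes this scale so well for a project our size is that new behaviors are just new subtrees, rather than edits to one monolithic state machine.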
The biggest concern I had about implementing stealth mechanics is out of the way. Vision cones and sound ripples have been visualized, and they don't look half bad for being a prototype.
With visualizations out of the way, I can go on holiday without my mind interrupting me to think about how to implement that when I get back.
As for the actual implementation of the AI stealth mechanics, that can wait until Christian is finished with the behavior tree. By the look of things, everything should be in place when I get back to begin implementing guard behaviors and whatnot in the overworld. Super exciting. I have been waiting almost a year to get to this point in development, so I’ll take being at this point in development as a holiday bonus.
On that note, I would like to thank everyone at Namespace Studio for the work that they have done this year; we would not be where we are now without all of your efforts. 2016 may have been shitty for everyone else, but Namespace had a great first year. So thank you, all of you, and have an awesome holiday and beginning of the new year.
Not much to say this week. Stealth research was fun, and I thought I might as well give a brief overview of my notes since I don't have much else to talk about. All stealth mechanics seem to break down into essentially two categories: hiding and detection. Hiding is the name of the game and comes in many forms: behind objects, in objects, in shadows, in disguise, in-visible (hahaha), and you get the idea. Detection happens whenever the enemy AI is alerted, whether that happens by seeing, hearing, or smelling you, or something you trick the AI into thinking is the player character. All of this is of course wrapped up nicely with clear communication to the player of what the AI can sense and where the AIs are located (not that this needs to be handed to the player from the beginning).
On that note, I am off to research how to code dynamic vision cones, as that seems like the trickiest and most crucial part of getting this to work.
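As a starting point for that research, the basic vision-cone test can be sketched in a few lines: an observer sees a point if it is within view distance and within half the cone angle of the facing direction. This is my own 2D sketch under those assumptions, with occlusion (raycasting against cover) still to be layered on top:

```python
import math


def in_vision_cone(observer, facing_deg, target, max_dist, cone_deg):
    """True if `target` is inside the observer's vision cone.

    observer, target: (x, y) positions; facing_deg: direction the
    observer looks, in degrees; cone_deg: full cone width in degrees.
    """
    dx, dy = target[0] - observer[0], target[1] - observer[1]
    dist = math.hypot(dx, dy)
    if dist > max_dist or dist == 0:
        return dist == 0  # out of range fails; standing on the guard counts
    angle_to_target = math.degrees(math.atan2(dy, dx))
    # smallest signed difference between the two angles, in (-180, 180]
    diff = (angle_to_target - facing_deg + 180) % 360 - 180
    return abs(diff) <= cone_deg / 2


# Guard at the origin facing east (+x), 10 units of sight, 90-degree cone.
print(in_vision_cone((0, 0), 0, (5, 2), 10, 90))   # True: well inside
print(in_vision_cone((0, 0), 0, (0, 5), 10, 90))   # False: 90 degrees off-axis
print(in_vision_cone((0, 0), 0, (20, 0), 10, 90))  # False: too far away
```

The "dynamic" part, deforming the drawn cone around walls, is the harder problem; the check above only answers whether an unobstructed target would be seen.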
Until next week,
The AI for this game is an interesting beast. Right now, I can see that it has a lot of power and flexibility to it. AIs can prioritize food, attacking, and other tasks. They have needs which they want to fulfill, and in essence they will be able to look a lot more lifelike than your standard JRPG AI. We've all seen the ones. They stand in one spot for the entire game, say the same 3 things, and wouldn't move out of the way of a flock of dragons. Not so with our AI. However, there is one problem that I'm still working out, and it has the potential to make the AI useless. That problem is that they are very expressive, at the cost of being quite difficult to work with. Making sure that states function as intended in the AI's database, making sure that actions perform all of their specified tasks, and so on has meant that I've spent the better part of 4 hours just adding a single action. And the bugs for that action haven't even been fixed yet. This has led me to another large concern, which is getting the AI to work with the dialog system. I want the AI to be intelligent about engaging the player, but right now, the troubles I've had integrating just one new action suggest even more troubles when integrating a whole new system. I'm continuing to search for solutions to this usability issue, but while the AI is technically usable, it may need a much deeper overhaul to get to where I want it to be.
I'm done with level loading for now. The JSON level format has proved to be everything I need it to be at the moment, and our level one prototype still needs doing, so it is on to that. Almost immediately, it became clear that the AI Manager would need reworking. So, what I've been working on is a new spawning method that condenses the three dictionaries I had previously into one, while allowing for the placement of generic AIs (like enemies) along with the unique ones (like nemmed). So far it is going well, and I think this new system will be a lot more user-friendly.
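The condensed-registry idea looks roughly like the following. This is a hedged sketch rather than our actual code, and the prefab names and the `unique` flag are illustrative assumptions; the point is that one dictionary can cover both generic AIs (spawn as many as you like) and unique, named ones (exactly one allowed):

```python
class SpawnRegistry:
    """One registry in place of three separate dictionaries."""
    def __init__(self):
        self.entries = {}

    def register(self, key, prefab, unique=False):
        self.entries[key] = {"prefab": prefab, "unique": unique, "spawned": 0}

    def spawn(self, key):
        entry = self.entries[key]
        if entry["unique"] and entry["spawned"] >= 1:
            raise ValueError(f"unique AI {key!r} already spawned")
        entry["spawned"] += 1
        # a real implementation would instantiate the prefab in the scene here
        return f"{entry['prefab']}#{entry['spawned']}"


registry = SpawnRegistry()
registry.register("goblin", "GoblinPrefab")               # generic: many allowed
registry.register("nemmed", "NemmedPrefab", unique=True)  # unique: exactly one

print(registry.spawn("goblin"))  # GoblinPrefab#1
print(registry.spawn("goblin"))  # GoblinPrefab#2
print(registry.spawn("nemmed"))  # NemmedPrefab#1
```

Folding the uniqueness rule into the entry itself is what lets the generic and unique cases share one code path, which is where most of the simplification comes from.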