So I've gotten some test animations in play and the core of the player equipment / attacks stuff is in; what's left is mostly making the rest of the animations, hooking them up and building in the existing melee hit detection system. Discovered some interesting quirks in how the Animator finds states by name/path, but worked around them in the end.
Whilst I was at it I made another pass at the character model and shaders, meaning I have shiny things to show off:
For shits and giggles, I also tried shrinking the player by half (making the bosses de-facto double in size). Results were memorable.
Think I'll probably lay off messing with that too much until the player attacks are properly in and I can see how much they'd be affected.
Character attack animations, with combos and root motion!
Root motion, if you were wondering, is where the animation can drive a character's position (like the leap in the third attack). It wasn't a previously enabled or used feature in the player pawn (who is normally a rigidbody whose velocity I directly control from script), so it took a bit of work to get it working properly without shenanigans happening.
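The general shape of 'root motion driving a rigidbody whose velocity is normally script-controlled' looks something like the sketch below. This is an illustrative pattern, not the actual project code; `useRootMotion` and the component name are made up.

```csharp
using UnityEngine;

// Sketch: let the Animator's root motion feed the rigidbody's velocity,
// so the physics engine still handles collisions for us.
[RequireComponent(typeof(Animator), typeof(Rigidbody))]
public class RootMotionRigidbody : MonoBehaviour
{
    Animator animator;
    Rigidbody body;
    public bool useRootMotion; // flipped on for attacks that move the character

    void Awake()
    {
        animator = GetComponent<Animator>();
        body = GetComponent<Rigidbody>();
    }

    // Defining OnAnimatorMove tells Unity the script will apply root motion itself.
    void OnAnimatorMove()
    {
        if (!useRootMotion || Time.deltaTime <= 0f) return;
        // Convert the animation's per-frame displacement into a velocity.
        body.velocity = animator.deltaPosition / Time.deltaTime;
    }
}
```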
As for combos, it's important to note these are custom combos, going from one action to the next according to what the equipment piece itself says it should be doing, as opposed to a system set up in the AnimatorController. This is important for two reasons. Firstly, it lets me combo any action into any other action as I see fit (always nice), but secondly it lets me define the animation transitions per combo, which is vital.
To expand on this a bit: currently, the player animation is all driven by a single AnimatorController, which is essentially the state machine Unity uses to blend animation clips. You have states in the AnimatorController, which can transition to other states as defined within it, or via script using Animator.CrossFade. Now, when I switch equipment, it only swaps the animation clips the controller uses; it makes no changes to the AnimatorController itself nor any of its transitions.
Consider two weapons that could have attack combos: a light, quick sword as above, and a heavy mace with big, slow swings. From the AnimatorController's perspective, their combos are just EquipAttackA0, EquipAttackA1, EquipAttackA2, whose animation clips are redefined by the equipment to be sword_slash_0 -> 2, mace_swing_0 -> 2 etc. The points within the animation clips that are good to transition into the next ones cannot really be declared in the AnimatorController, because it has no bloody clue whether the animations are designed to sync up for combos at frame 10, 20 or whatever. They have to be defined externally.
This has taken some shenanigans, mostly because Unity's Animator component will not tell you when a specific animation state stops playing, so you have to finagle a bit with StateMachineBehaviours to get a working back and forth 'okay am I playing this action or has it finished yet?' dialogue with the CharacterAnimator script. Once that was set up and reliable, I was able to build a queue for actions, ie if you hit the attack button whilst the first attack is playing, it queues the next one along in the combo and plays that when it's an appropriate time to use the combo transition.
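A rough sketch of that StateMachineBehaviour handshake, to give the idea; `CharacterAnimator` and its notify methods are stand-ins for illustration, not the actual project classes. Tagging messages with the action name is what lets the receiver ignore a stale 'end of action' from a previous attack.

```csharp
using UnityEngine;

// Stand-in for the script that owns the action queue.
public class CharacterAnimator : MonoBehaviour
{
    public void NotifyActionStarted(string action) { /* mark action as playing */ }
    public void NotifyActionEnded(string action)   { /* pop the queue, start the next combo action */ }
}

// Attached to each action state in the AnimatorController; reports
// start/end of that state back to the CharacterAnimator.
public class ActionStateReporter : StateMachineBehaviour
{
    public string actionName; // set per state in the Inspector

    public override void OnStateEnter(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        animator.GetComponent<CharacterAnimator>().NotifyActionStarted(actionName);
    }

    public override void OnStateExit(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        animator.GetComponent<CharacterAnimator>().NotifyActionEnded(actionName);
    }
}
```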
Fun, slightly complicated times. I'll need to stress test it a bit to make sure it's watertight and weird glitches can't creep in. Animation code is always a little scary because everything's spread across multiple frames; I've had bugs where I've combo'd from attack0 to attack1, only for attack1 to think it's stopped playing because it received attack0's 'end of action' message and similar stupidity. Seems to be working for now, at least.
Attacks like these aren't really that useful vs the giant bosses, but they could be useful against smaller minions (which are on the drawing board) and I'm kind of toying with having the quick 'tap' attacks reflect ranged projectiles with good timing. Next up is setting up the animation logic for the actually useful 'spin slash' air move, which is going to be interesting...
Most of my early-days prototypes involved a capsule that tried to move around the world but mostly clipped through shit it wasn't supposed to. You are doing fine.
(don't use the stock CharacterController, it's pretty shite)
I'm hoping that tomorrow I'll manage to get my other stuff out of the way fast enough I'll feel up to sketching out what I am planning to do on paper. Then I can probably set up some actually useful plans and maybe scratch out some script functionality or whatever.
The funniest thing about Unity's CharacterController is that not even its (more recent) sample assets use the CharacterController. There are a few features/tools in Unity that got built in at one point but fell out of maintenance; the CC feels like one of them, and the other is the tree generator thing for terrains (presumably because SpeedTree make a fancy Unity integration doodad and lots of money changed hands). They're stuff that should probably be removed, yet can't be because of legacy reasons.
Just have a rigidbody, constrain its rotations so it doesn't fall over, and control its velocity either via script or root motion; this works anywhere from the complex character I've got running around to a red cube on a white plane. It'll move and handle collisions for you. You'll have to do your own 'am I standing on the ground?' tests, but you'd have to do that with CharacterController anyway because its own isGrounded is ridiculously unreliable. CharacterController treats itself as an axially locked capsule as well, which isn't exactly inspiring.
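A minimal version of that setup might look like this; the tuning numbers and names are placeholders, and the grounded test is just a single raycast (a real character would probably want a spherecast or several rays).

```csharp
using UnityEngine;

// Sketch: rigidbody capsule with frozen rotation, velocity driven from
// code, and a do-it-yourself grounded test.
[RequireComponent(typeof(Rigidbody), typeof(CapsuleCollider))]
public class SimplePawnMotor : MonoBehaviour
{
    public float moveSpeed = 5f;
    public float groundCheckDistance = 0.1f;
    Rigidbody body;

    void Awake()
    {
        body = GetComponent<Rigidbody>();
        body.constraints = RigidbodyConstraints.FreezeRotation; // don't fall over
    }

    public bool IsGrounded()
    {
        // Cast from just above the capsule's base so we don't start inside the floor.
        Vector3 origin = transform.position + Vector3.up * 0.05f;
        return Physics.Raycast(origin, Vector3.down, groundCheckDistance + 0.05f);
    }

    void FixedUpdate()
    {
        Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
        Vector3 velocity = input * moveSpeed;
        velocity.y = body.velocity.y; // keep gravity's contribution
        body.velocity = velocity;
    }
}
```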
And yeah, absolutely plan stuff on paper. I have like a dozen plain-paper notepads covered in scrawls and diagrams; a lot of vector problems are a whole lot easier to intuit by drawing them out than by staring at vstudio and going 'um'.
Attack events etc are hooked up, as is the spin slash (the red boxes are visualisers for the attack's hit detection). Controls need a bit of fine tuning (you'll notice in the gif I jump, then enter the attack, because currently hitting 'attack' whilst aiming the jump doesn't go straight into it yet), but it's a workable prototype.
Now to iterate on the boss parts, so that limbs can be lopped off, there are more than one Theme to blend together and so on. Will start with the limbs themselves; those armour plates aren't going to be empty much longer.
The weapon trail, for the curious, is not the stock Unity Trail or Line Renderer, but instead the Melee Weapon Trail script. It's an oldie, but it's free and still works with only the one update due to an API change (it was released in 2011 :'D). I have them set up to emit only when the attack strikeboxes are 'open', as a visual indicator.
So in some bad news, my internet connection has slightly bricked itself to the point even uploading a gif takes forever.
Progress continues:
Armour pieces are now properly implemented, and properly block attacks. This means you'll have to strike at the weakpoints of a boss (ie joints / "where the armour isn't") unless you're swinging around a hammer or something.
This actually took a bit of involved work to set up correctly; attacks detect hitboxes as you might imagine, and the simplest approach is just 'abort if we hit an armour hitbox'. That doesn't get you very consistent behaviour though; the attack can easily detect a regular hitbox first, pass the 'you have been damaged' message, and then hit an armour piece and abort. In the end I wound up adding a two-frame delay between detecting a hit and sending the damage message, just in case we hit an armour piece in that interim and need to abort. It's not a noticeable delay, thankfully, but it does prevent things like 'hitting the leg through the armour by running up really close to it and spamming attack'.
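The delay idea sketched as a coroutine, assuming hypothetical `OnHitDetected` / `OnArmourHitDetected` callbacks and a `TakeDamage` message on the target; none of these names are from the actual project.

```csharp
using System.Collections;
using UnityEngine;

// Sketch: hold damage back for two physics frames so an armour hit
// in the interim can cancel it.
public class MeleeAttackSketch : MonoBehaviour
{
    bool armourHitThisSwing;

    // Called when the strikebox overlaps a regular hitbox.
    void OnHitDetected(GameObject target, int damage)
    {
        StartCoroutine(SendDamageAfterDelay(target, damage));
    }

    // Called when the strikebox overlaps an armour hitbox.
    void OnArmourHitDetected()
    {
        armourHitThisSwing = true; // abort any pending damage
    }

    IEnumerator SendDamageAfterDelay(GameObject target, int damage)
    {
        yield return new WaitForFixedUpdate();
        yield return new WaitForFixedUpdate();
        if (!armourHitThisSwing)
            target.SendMessage("TakeDamage", damage, SendMessageOptions.DontRequireReceiver);
    }
}
```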
Fixed a bug that could get the player stuck on the boss, able to escape only by jumping
This was due to my attempts to add a 'wallslide' as it turned out. Not the 'jump into air, slide on wall' type of slide, but making the input follow along the wall if you tried to run into it. The way I handle character movement and collisions is by having a physics capsule and controlling its velocity from code, and it had some problems with getting stuck / slowing to a crawl when pushing up against walls.
Unfortunately, the sliding code I'd been using to correct this could get caught in a trap of accidentally cancelling out the player's velocity entirely, getting them stuck in place if running up against odd collision geometry (like that of a moving boss). In the end I removed the wallslide code entirely and instead fiddled with the physics material of the player; the 'slowing' against walls turned out to be the physics engine applying friction, so I just made the player infinitely slippery and it solved the issue (don't laugh).
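The 'infinitely slippery' fix boils down to a zero-friction physic material on the player's capsule; something roughly like this, set up once (the Minimum combine mode means zero friction wins regardless of what the wall's material says):

```csharp
using UnityEngine;

// Sketch: apply a frictionless physic material so the player doesn't
// grind to a halt against walls.
public static class SlipperyCapsule
{
    public static void Apply(CapsuleCollider capsule)
    {
        var mat = new PhysicMaterial("Frictionless")
        {
            dynamicFriction = 0f,
            staticFriction = 0f,
            frictionCombine = PhysicMaterialCombine.Minimum
        };
        capsule.material = mat;
    }
}
```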
Fixed up the Boss's AI and gave it a 'chase player' routine.
It turns out, the Boss's AI wasn't actually running utility theory AI properly but I never noticed because it had so few options to pick from anyway. Now it has a chase routine to run after the player if it has a target but no viable attack options. Previously it would stop to idle if you ran around behind it and only stomp you with the back feet if you obligingly stood close enough.
Increased the boss's speed to the point where they can actually chase down the player now.
Yeah this was a bit of a problem; before the player's jog was enough to outrun the boss and the running and jumping just made it worse; it could basically never catch you. I was terrified I'd have to redo the walkcycle from scratch but as it turns out I just had to fiddle the settings a bit. Happy days! Combined with the above AI fix it's a lot more like an actual bossfight now.
Limb breaking is properly implemented; damage a limb enough and it goes to ragdoll, with the boss having to drag it around with itself. Break enough of them and it won't even be able to stand up.
I actually implemented this aaaages ago as part of the boss animation prototyping, but it did need some fixes, updates and integration into the part health / death messages system. It also correctly blocks AI activities now (previously it would still try to attack you with a dead limb, and because of the way the boss animation works, could still hit and damage you with them because animation skeleton != visual parts)
There's still some issues, including an interesting one involving surviving limbs once one dies; since all the legs pull on the main body to pull the boss around, once a limb dies it stops pulling. Meaning, amusingly, the boss will start crawling away from damaged limbs because it isn't currently compensating for the loss.
On Hit sfx/sound has been added
The sword attacks have their own 'woosh'ing sounds, and there's sounds and particle sfx for hitting squishy damagable bits / sparking off armour, none of which are technically required for a prototype, but all of which combine to make the combat feel a whole lot more responsive. Never before have I so wanted to post a video and so been unable to do so
Still todo on this front: hit response animations/sfx for the player (ie faceplanting into the dirt with the overhead stomp / being sent flying by the limb swipe), making hits against the boss cause the part itself to jolt a bit.
New limb / armour parts
Working on making new limb and limb armour pieces. I have a chainmail knight limb serving as the 'squishy interior' for the Knight Theme, and I'm working on metal armour, a vine-limb and tree bark for the Limb Armour / Tree Theme parts. Hit an interesting snag in the 'how do I make limb armour to fit any limb?' question, since I absolutely want to be able to cross between Themes for this (ie Knight Limb protected by tree bark armour for Knight + Tree, or vice-versa etc). Witness, the solution:
Yep, that's basically a 3dsMax FFD modifier. I have a Noise and Fit To Bounds modifier as well. TL;DR I can now distort / shape armour pieces to fit any given limb. I should just need a few pieces per Theme and just be able to re-use them in any given context by rescaling, warping etc.
So that's where things are. Still on the todo list are:
Player on-hit reactions
Basic UI (healthbar etc)
Creating and integrating the new Tree Theme (with all the inevitable 'whoops I forgot to implement that' moments now there are actually two themes for the generator to blend together)
Ways of reflecting health on boss parts (SFX, materials etc)
A proper arena to fight in
Remaining player sword animations
Boss/Player death; both the Heart and the Player can be reduced to 0 health but currently nothing actually happens
So if anyone wants to see a really good talk on how math and video game design interrelate (@Usandru in particular, I'm thinking of you here), there's this GDC talk about jump physics:
It's about 'how to code your character jump', but I find it emblematic of the way you have to approach math in a game context; namely, make the math work for you.
The simplest way to implement a jump, the one I see tutorials recommend time and time again, is to just apply a sudden, arbitrary upward velocity, perhaps at an angle, and yes, this will work; the player will fly into the air and fall down in a parabolic trajectory. But it's also completely terrible for design purposes. As a designer (be it mechanics designer, level designer, etc) you want to know how far the player can jump and how high, because this is what matters for placing platforms. But if all you can configure for your jump is 'how much velocity do we add?'... well, how can you figure that out?
The math is relatively simple; it's basic integration / differentiation, equations of motion etc, but it's all about taking equations and rearranging them so that the values you as the designer control are the important ones; in the case of this talk he reframes the traditional parabolic projectile arc equation in terms of distance and height (ie the relevant factors) instead of initial velocity and gravity.
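If I remember the derivation from the talk right, the reframing looks something like this: instead of hand-tuning initial velocity and gravity, you derive them from the designer-facing values (peak jump height h, horizontal distance to the peak xh, and the character's run speed vx).

```csharp
// Sketch of the designer-friendly jump parameterisation.
public static class JumpMath
{
    // Initial upward velocity needed to peak at height h after
    // travelling horizontal distance xh at run speed vx.
    public static float InitialJumpVelocity(float h, float xh, float vx)
        => 2f * h * vx / xh;

    // The gravity that makes that arc work (negative = downward).
    public static float Gravity(float h, float xh, float vx)
        => -2f * h * vx * vx / (xh * xh);
}
```

Tweak h and xh and the velocity/gravity fall out, rather than the other way around.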
I actually kind of use these principles already in the game since I have all the ballistic equations and math set up (it's how the jump aim/prediction thing works in the gif two posts up), but it's just a really interesting talk in general (that 'increase gravity after peak height' trick is a nifty one I might borrow for later). Food for thought, etc.
I've actually just now been thinking about how to describe vehicle movement - specifically tank movement - since there are interesting factors like turn speed at speed and the like that I kind of want to figure out a good model for.
(though being honest, I've mostly just been doing minor things to justify my daily progress checkmark since I've been a little busy and out of it )
As another thing though. How do you generally handle UI? Not in the sense of like, what things you allow as input, but in the sense of how you structure your code.
I mean the obvious implementation is a UI controller class that runs a lot of if-checks in Update, and if any of the UI bools return what you're listening for, you go and call the method you associate with that input.
The problem with this, I feel, is that you need more and more nested if-statements the more contextual conditions you want to account for. If input means different things based on the player's context, then you either need to account for that in the UI Controller's Update method (awful idea) or in your Input Action methods (doable, but they'll bloat a lot). The alternative, which is what I've been trying to figure out if you can do, is a setup where you call arbitrary functions that the rest of the game can switch out as needed, ideally completely separating the UI system from the game logic.
The issue is just that as far as I can tell, the only way to do that is using Invoke, and you can't Invoke methods that take params, which for the most part isn't a big deal, but for some things like mouseclicks has the problem that stuff like where the mouseclick hit is pretty important.
My best idea for how to solve that is storing any important input information like where the click hit in a private variable and providing a public method to retrieve it, which isn't a terrible idea, I feel, but still a bit more interconnected than I would really like.
(I'm also a little leery of how expensive Invoke calls and the like might be, but it doesn't seem like it should be that bad, so eh)
Rather than Invoke (which is intended for calling arbitrary functions at some point in the future), you'd use Send/BroadcastMessage, which does allow for parameters - only one, sadly, but that's what structs are for.
Regarding UI... I should mention 'UI' usually refers to things like the main menu and whatever doodads you need to display on screen (button prompts, healthbars etc); in terms of player actions that's usually called the Controls or Input.
A lot of control code is in the Update step yes (quite literally; you can actually miss input events if you use say FixedUpdate instead). If your controls are highly contextual however your best bet is some sort of state machine, where each possible state of the player that requires unique controls (ie underwater, or in a vehicle etc) gets its own little control handler, and the main input controller update method just calls the controller for the currently active state.
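One way to sketch that 'control handler per state' idea: a dictionary of handlers keyed by state, with the main Update just delegating. All the names here are invented for illustration.

```csharp
using System.Collections.Generic;
using UnityEngine;

public enum ControlState { OnFoot, InVehicle, Underwater }

public interface IControlHandler
{
    void HandleInput(); // read Input.* and issue commands to the pawn
}

public class InputController : MonoBehaviour
{
    public ControlState currentState = ControlState.OnFoot;
    readonly Dictionary<ControlState, IControlHandler> handlers =
        new Dictionary<ControlState, IControlHandler>();

    public void Register(ControlState state, IControlHandler handler)
        => handlers[state] = handler;

    void Update()
    {
        // Poll in Update; FixedUpdate can miss button-down events.
        if (handlers.TryGetValue(currentState, out var handler))
            handler.HandleInput();
    }
}
```

Swapping `currentState` is then the only thing the rest of the game has to do to change what the controls mean.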
Alternatively, you can break things up a little. In my case, the controls are pretty complicated:
You have player movement (controller stick) and run state
You have the jump, which involves the many varied states of the jump button (tap, double tap, held...) doing various different things ('plain' jumps, aimed jumps, to-target jumps...)
You have the lockon state
You have the camera controls, which also interact with the lockon state
You have the action inputs, ie attack etc which are hilariously contextual (are we locked on? tap/hold/double-tap? are standing/flying/moving/aiming? what weapon are we using? etc)
I actually have separate unity components for each of these tasks. I have a division between 'Pawn' (the actual ingame character on the screen, with health and equipment and velocity and stuff) and 'Controller' (the controlling 'intelligence' behind a Pawn, be it the player or an AI bot), a design pattern I copied straight out of Unreal Tournament / the UDK. The Pawn does not care how decisions are made, it just does what it's told to. This also means I can be pretty flexible; since the Controllers in turn don't care too much about what their Pawns are, you can actually swap control to the boss in the debug builds and run around as a giant clanking monstrosity; it's how I was testing the boss animation before any AI or player character was implemented.
To handle the above complexity, I broke up the Player Controller (which reads inputs and sends them to its Pawn) into a series of subcomponents, each one dedicated to one of those given tasks. The PlayerLockonInput component for instance only interacts with the PawnTargeting component on its pawn, which is where 'what am I locked on to?' is actually stored (the pawn needs to be able to rotate to face its target, the player headlook thing needs to know etc, so that's info I store on the pawn rather than the controller).
In the most extreme case, handling attack inputs, the Controller quite literally just sends inputs directly to the Equipment itself, which then decides how it responds to it if at all and acts accordingly.
Tl;dr, if you're running into a situation where you're looking at an absolutely monolithic class, this is the point where you step back and look at ways to break it up into smaller component chunks. Try to find ways to break things down into tasks that require the minimum amount of information (ie 'connections to other components') as possible.
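The Pawn/Controller split in miniature, heavily simplified and with hypothetical method names; the point is just that the Pawn exposes 'what it can do' and the Controller decides when.

```csharp
using UnityEngine;

// The in-game character: health, equipment, velocity. Doesn't care who's driving.
public class Pawn : MonoBehaviour
{
    public void Move(Vector3 direction) { /* set rigidbody velocity etc */ }
    public void DoAttack() { /* forward to equipment */ }
}

// The 'intelligence' behind a Pawn; player or AI.
public abstract class Controller : MonoBehaviour
{
    public Pawn pawn;

    // Possess a different pawn at runtime, e.g. debug-controlling the boss.
    public void Possess(Pawn target) => pawn = target;
}

public class PlayerController : Controller
{
    void Update()
    {
        var stick = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
        pawn.Move(stick);
        if (Input.GetButtonDown("Attack")) pawn.DoAttack();
    }
}
```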
I assume you have an idea of what you want your project to do, maybe even a "blueprint" or ten. But if something doesn't work, or you change your mind, how do you operate?
Is your project a closed thing, or is it designed with "expansion-capability" in mind? Is there a method for such a thing, or does modularity cover that aspect?
Apologies for the late reply; internet derped out for the entire day (sigh).
If something doesn't work, the important thing is to try and isolate why, look at what design goal / necessity that feature is meant to fulfil and either come up with a way to fix it or find an alternate method. Then just keep on chugging.
In my case, I had two major 'oh shit we have a problem' areas: the boss animation and the player melee attacks. In the case of boss animation this was mostly a technical issue. Some of the tech-side stuff has been solved with third-party assets from the asset store (Final IK for inverse kinematic solvers, Curvy for the spline paths used to drive some of the animations), though the animation and blending side of things I did by hand (since that's... y'know, kind of project specific).
Player melee attacks: this actually came up in an older prototype (where I started designing the player character and their controls before the bosses, which proved less than smart). Basically, the initial pass on player melee and controls was Dark Souls-esque; telegraphed swings and fixed animation, with no real jump to speak of. This design idea promptly fell flat on its face as soon as the part-based health system of the bosses meant the short-range melee swipes were worse than useless. GG, lesson learnt: do not just blindly copy off Dark Souls (read game design tuts and game reviews these days and you start to wonder if this is an industry-wide running joke of some sort ).
But yeah to solve that problem I reworked the jumping and made the player extremely acrobatic, so that vertical height mattered for less and melee was still viable.
As for extension, it should be possible (in terms of 'adding more Themes into the boss generator'), and there are features in Unity that make it much easier. How much easier though, it's too early to say. Things are intended to be modular enough that I can add, say, character costume pieces, weapons, spell and boss parts without having to make a fresh game build, which is what supporting modding requires in general.
That said, boss parts in particular are pretty involved and may well require the third-party tools mentioned above, so it may not be possible to release a mod kit completely for free, because of the asset store licenses. It's a fuzzy one.
Watched the video - and a bunch of other GDC '17 videos too. The Experimental Gameplay Workshop was fascinating as always, and there's a pretty neat overview of Procedural Content Generation by Kate Compton which I really want to find the slides for so I can check out some of the stuff she references more easily. As a rule of thumb I'm really suspicious of PCG, but on the other hand I find it incredibly fascinating, and since I kind of detest open-world games, I'm not liable to fall into the obvious PCG trap so it'll probably be alright.
Also I cannot into art, but I can into math, so some funky but pretty terrain stuff might spruce up my prototypes a bit.
On another note. I confirmed today that we were talking about different Invoke methods. You were thinking of the one in Unity, but I'd been doing a lot of searching on calling methods without knowing which one in advance, and I'd found a class in C# called MethodInfo, which is part of System.Reflection. You use it to basically store a method, and then you can call Invoke on it, passing the object you want to invoke it on and an array of objects that are the params of the method (or null, if it takes no params).
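For anyone following along, Reflection's Invoke in a nutshell; plain C#, no Unity involved (the `Door` class is just a toy example).

```csharp
using System;
using System.Reflection;

class Door
{
    public void Open(float speed) => Console.WriteLine($"Opening at {speed}");
}

class Demo
{
    static void Main()
    {
        var door = new Door();
        // Look the method up by name, then invoke with a params array.
        MethodInfo open = typeof(Door).GetMethod("Open");
        open.Invoke(door, new object[] { 2.5f });
        // Parameterless methods take null instead of an args array.
    }
}
```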
This would be what I was looking for, and hopefully it's a useful alternative to states because I'm kind of stubborn and I want to make this design work.
I'm still pretty leery of the performance cost though, so I'm going to have to set up some testing.
Ooooh, Reflection Invoke. Sorry, I'm familiar with Reflection, but I learned about Unity's Invoke first because Unity is where I learnt C#.
Yeah, I've used Reflection a couple of times, but only where it's completely unavoidable (specifically, for an in-Inspector event scripting tool for those 'hook up this specific button's OnPress event to this specific door's Open() function' moments; to handle level scripting, basically). Honestly, I've probably used Reflection more times than I've used Unity's Invoke; I don't think I've ever used Unity's Invoke at all, come to think of it, since I prefer Coroutines if I need code execution across multiple frames.
Send/BroadcastMessage though is pretty useful in some specific cases; any component on the affected gameobject/s with a function by the name passed gets called. The Boss Animation system makes use of it to call events for spawning sfx / stop/starting attack hitboxes and so forth at given points in animations, specified in the Unity Inspector. It can give you an extra degree of flexibility, basically. Performance hit isn't too worrisome as long as you don't do it too many times per frame (in the anim events case, it's only one or two times every now and then, so it's fine).
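The anim-event pattern, sketched; the event placed on the clip calls a method on a relay component, which fans the notification out via SendMessage. Component and method names here are examples, not the project's actual ones.

```csharp
using UnityEngine;

// Sketch: an AnimationEvent on the attack clip calls this method by name;
// SendMessage then reaches every component on the GameObject that
// implements OnStrikeboxOpened(string).
public class BossPartAnimEvents : MonoBehaviour
{
    public void AnimEvent_OpenStrikebox(string boxName)
    {
        gameObject.SendMessage("OnStrikeboxOpened", boxName,
            SendMessageOptions.DontRequireReceiver);
    }
}
```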
Oh, speaking of unexpected performance hits and Reflection, one thing I should probably warn you about is Debug.Log. It's useful, but Unity does some Reflection shenanigans so it can tell you what line the message was called from. Makes it easier to track them down, sure, but it does mean that multiple Debug.Logs per frame can really tank your framerate. Watch out for that, and don't try logging something every frame; it's awful. If you're trying to debug something that operates over multiple frames, just expose a property to the Inspector (or in less fancy terms 'make it a public variable') and you can watch it in play mode. Quite a lot of my components have public variables that probably shouldn't be, just so I can watch them in the Inspector and keep an eye on what they're doing.
Send/Broadcast looks super powerful, but I have developed an inherent aversion to doing things that cast such a wide net.
(the perils of a dad with a job as a software architect and a proper comp.sci degree I suspect)
Though it should be fine as long as you just keep good documentation, eh. Something to keep in mind for later. I should actually build more prototype before I get more fancy ideas.
As for making things public for debugging, shouldn't you use [SerializeField] for those vars then? As I understand it, that should expose them well enough without making them accessible from other scripts.
Oh yeah, it's super-easy to cock things up with it via typos and the like. But... well, it's one of the few ways you can script 'in editor' as it were, as opposed to defining interactions purely through code, which is less modular than you'd think it would be. Sometimes - usually, level scripting - you have to cast a wide net to give flexibility.
To give an example use-case: in the Boss Generator there is a divide between the animation of a Part and the visuals of a Part. One is a 'Skeleton Group', i.e. 'LimbGroup', which is essentially an 'abstract' form of the given part holding all the animations / AI activities, and which in turn spawns a Visual Part that serves as the 'concrete' form (if you will) of the actual Part, e.g. the Knight Theme's armoured limb piece. This basically means that if I ever want to add an animation like a new limb attack, I build it for the LimbGroup prefab and all limbs get the benefit of it.

However, because of this, talking between the Skeleton Group and its Visual Part is somewhat complex, as the Group itself has no idea what Visual it has or what components are on it (beyond the VisualPart component). It's most obvious with sounds and sfx (specific to the Visuals, but called by animations in the Group) and health (likewise; hitboxes and health components are Visuals-specific, but the Group still needs to be aware of how dead it's supposed to be). There's some other shenanigans like the Visual Part having to tell its Group how big it is, ways for the Visuals to 'configure' the behaviour of their Group and so on. But SendMessage is vital for this to work at all (Broadcast is not safe for this though, as it goes to all child objects and thus any child parts).
There is some way to hook up events and listeners together from the inspector (the GUI components, for example), but how you can set it up for your own events is not well documented and I haven't been able to figure it out unfortunately. It would make things greatly simpler, though.
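For what it's worth, the mechanism behind those Inspector-wireable GUI events is UnityEvent, and you can get the same hook-up UI for your own scripts just by exposing a serialized UnityEvent field; a minimal sketch (the Door/onOpened names are my own example):

```csharp
using UnityEngine;
using UnityEngine.Events;

public class Door : MonoBehaviour
{
    // Shows up in the Inspector with the same +/- listener list as Button.onClick:
    // drag in any scene object and pick a public method for it to call.
    public UnityEvent onOpened;

    public void Open()
    {
        // ...play the open animation, etc...
        onOpened.Invoke();
    }
}
```

Listeners added through the Inspector are serialized with the scene, so this gives you the 'script in editor' flexibility without SendMessage's string-based lookup.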
And yes you could just use [SerializeField] on private vars but I'm lazy and kind of improvised this method of debugging on the fly (don't pick up my habits). I should probably go through my code base with a proverbial hoover at some point...
I'll admit I'm not very consistent with it. Lately I've been trying to improve; nowadays if there's a variable I need to see changing in real-time in editor (as opposed to the usual breakpoint debugging), I add '__' to the variable name, ie '__debugVar' etc. Unity's Inspector will remove the first '_' when it converts the variable name to a display string, but not the second, so I use it as a marker for 'this is exposed for debugging purposes not configuring the component'.
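Side by side, the two debug-exposure approaches discussed above might look like this (field names are illustrative):

```csharp
using UnityEngine;

public class MeleeAttack : MonoBehaviour
{
    // Visible and tweakable in the Inspector, but private to other scripts:
    [SerializeField] private float comboWindow = 0.4f;

    // The double-underscore convention: public so it can be watched live in
    // play mode, with the name flagging it as debug-only, not configuration.
    // (As noted above, the Inspector's display name keeps the second '_'.)
    public int __debugComboStep;
}
```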
So it turns out there's quite a leap between 'making a mesh deformable' to 'making an entire gameobject deformable'. Lots of shenanigans involving coordinate spaces, relative coordinates, bound calculations, order-of-operations glitches (have to ensure child objects aren't deformed before their parents or they get effectively deformed 'twice') and all manner of fun.
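One way to guarantee that ordering is simply to walk the hierarchy top-down, so every parent is deformed before any of its children ever gets touched; a sketch of the idea, where Deform() stands in for whatever actually applies the modifier (this is my own illustration, not the project's code):

```csharp
using UnityEngine;

public class DeformerSketch : MonoBehaviour
{
    // Depth-first from the root: deform the parent first, then recurse into
    // each child subtree, so nothing is effectively deformed 'twice'.
    void DeformRecursive(Transform node)
    {
        Deform(node);                      // parent first...
        foreach (Transform child in node)  // ...then every child beneath it
            DeformRecursive(child);
    }

    void Deform(Transform node)
    {
        // Apply the bezier surface / noise modifier to this node's
        // mesh or transform here, in the deformer's coordinate space.
    }
}
```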
(This is a gameobject with two deformable meshes and one deformable transform (the white head thing) being deformed by a bezier surface to curve it, then a noise modifier for distortion. I'll probably never ever use the noise modifier, but it's a useful tester)
I thiiink it's working now to the point I can actually start using this to make deformable Boss parts that fit to any given size / shape requirements, so wish me luck.
(and yes I maaay have dropped down a certain black hole for a few days as the avatar may imply. Trying to get back on the wagon now. Ye gods my melee mechanics / animations are never going to beat Platinum Games, are they?)
Yes, Unity can be funny that way with the scales. I once tried to make a procedural planet (of the arbitrary-detail kind) by subdividing and "spherizing" the faces of a cube, and while the mesh has the right shape, I've yet to figure out what the hell it did to the normals.
And spawning things on the deformed faces was very 'entertaining' on its own after the fact. Or maybe I need formal education on points, vectors and tensors.
I wouldn't say you need one (I mean, I don't have one); you can generally find this stuff out via your own learning. The trick is in figuring/finding out which specific bit of maths you need to learn. It's something of a broad topic, after all. Never helps that the more complex stuff always gets described in the most obnoxiously obtuse manner possible.
As for 'turn a cube into a sphere', have you looked at the Catlike Coding tutorial series at all? They're found here and cover the cube-to-sphere thing you mentioned, as well as tutorials on perlin noise and its derivatives (what you'd want for the normals), amongst a range of other topics.
Catlike Coding's tutorials in general are very informative and useful if you need a grounding in a certain topic. I highly recommend them.
I had seen the deformation one, but I didn't know they were more. Thanks!
About the sphere, I know that part (normalizing vectors, then increasing the magnitude depending on the value of a 3D noise function at the resulting point). But for some reason it never quite worked with the normals, and the specular light reveals the uneven "seams" between the planes. One of these days I should tackle that problem again.
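Regarding those seams: they usually come from the duplicated vertices along the cube-face edges, because each face's copy of an edge vertex gets a different normal when normals are recalculated from its own triangles. For a sphere centred on the mesh origin you can sidestep recalculation entirely, since the exact surface normal at a vertex is just the normalized vertex position; a sketch:

```csharp
using UnityEngine;

public static class SphereNormals
{
    // For a unit-style sphere centred at the origin, normal = normalized position.
    // Assigning these directly (instead of RecalculateNormals) gives duplicated
    // edge vertices identical normals, removing the specular seams between faces.
    public static void Apply(Mesh mesh)
    {
        Vector3[] verts = mesh.vertices;
        Vector3[] normals = new Vector3[verts.Length];
        for (int i = 0; i < verts.Length; i++)
            normals[i] = verts[i].normalized;
        mesh.normals = normals;
    }
}
```

Note this is only exact for the undisplaced sphere; once the noise pushes vertices outward, the true normals tilt with the height gradient and need further adjustment.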
The knight limb is now a 'squishy' chainmail piece that any weapon can harm, and it spawns in extra armour plates to protect it that are reshaped to conform to it. Currently there's only the top part, but you get the idea. The 'undeformed' piece is just a flat plane, though it does need enough geometry to deform well, so its polycount is higher than you'd think. The collider deforms as well.
(this image uploaded after several rounds of imgur getting caught in a 'are you a bot?' loop =_=)
Looking good! The third image reminds me of the Looking Glass Knight, which was a pretty nice design IMO. How reflective is the surface?
IIRC I read somewhere that reflection is a stupid expensive thing to force your GPU to do, but I kind of wonder if you could get it working on that set of stuff. Might look super cool.
Reflection is a bit of an odd duck; there's been a number of techniques arriving recently (that you've already seen in modern titles) to make it a whole lot easier.
Currently it's just reflecting the skybox, but I could set it up to have proper reflection samples in the scene. Funnily enough, the 'reflect cubemaps generated in the level' technique for runtime reflection was done as early as Half-Life 2. Christ, those were a pain to work with at times, though. There are also some techniques that take advantage of the theory behind Deferred Rendering to do screen-space reflections through !!SHADER WIZARDRY!! but god knows how that bloody works.
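(For reference, Unity's built-in take on the 'cubemaps generated in the level' technique is the ReflectionProbe component: it captures its surroundings into a cubemap that nearby reflective materials sample automatically. A minimal sketch, assuming a probe whose mode is set to Realtime with refresh 'Via Scripting':)

```csharp
using UnityEngine;

public class ProbeRefresher : MonoBehaviour
{
    // A scene-placed probe; its captured cubemap feeds reflective
    // materials within its bounds, no shader wizardry required.
    public ReflectionProbe probe;

    void Start()
    {
        // Realtime probes set to refresh 'Via Scripting' are
        // re-rendered on demand like this:
        probe.RenderProbe();
    }
}
```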
True and accurate reflections and light bounce of the sort you see in CG films are expensive as hell, yes, but this is games running realtime and we will cheat like utter bastards wherever we can get away with it. A fast, inaccurate result that still looks passable is a-okay in our books.
The materials are by no means final. I mean, they're not even textured, aside from the chainmail limb.
Got everything hooked up for armour collisions, hitboxes and so on, as well as the lower limb plate: