So music was kinda one of the main points of this project. This is in direct contrast to most of my other 'games', which either had no sounds at all, or had them bolted on as an afterthought close to the first beta (where those projects also ended).
First question is, where to find music? I want the project to be free, so any assets I use must also be free. There is a lot of free music on the internet, but I didn't even have to search this time. A while back, I was listening to Tilde radio, as I do, and I saw a website link listed as an artist of the currently playing song. I bookmarked it for further inspection and then forgot about it.
Now I remembered. AudionautiX seems to be a repository of various music made by one cool guy, all licensed under CC BY 4.0.
For now, I used Marauder and Big Car Theft.
Now that I have music it's time to play it somehow. Raylib for some reason differentiates between Sounds and Music. You use Music for music and Sound for sounds. Simple. (The actual difference, as far as I can tell: a Sound is decoded fully into memory, while Music is streamed from the file in small chunks.)
It might not be the most efficient way of doing things, but I load the music from file every time I want to switch to it, so only one Music is loaded at a time. It goes with my goal of minimizing the memory footprint.
It's not like I plan to switch music mid-level anyway. The load is barely noticeable on my device, but I'm not sure how it would feel on a slower hard disk.
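Sketched out with raylib's streaming API, the switching looks roughly like this (the function names are raylib's; the `SwitchMusic` helper and the bookkeeping around it are just my illustration, not compiled here):

```c
#include "raylib.h"
#include <stdbool.h>

// Only one Music is alive at a time; switching unloads the old stream first.
static Music currentMusic;
static bool musicLoaded = false;

// Hypothetical helper: swap the active track for one loaded from disk.
void SwitchMusic(const char *fileName)
{
    if (musicLoaded) UnloadMusicStream(currentMusic);
    currentMusic = LoadMusicStream(fileName);  // streamed, not fully decoded
    PlayMusicStream(currentMusic);
    musicLoaded = true;
}

// In the game loop, the stream buffer has to be refilled every frame:
//     UpdateMusicStream(currentMusic);
```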
So beams will no longer be random.
First thing I should probably mention is that I know nothing about sound representation. So the first thing I did was use AI to generate an example raylib music visualization program in C.
With this piece of code to study, I was able to deduce the following:
To analyse music, you need to load it as a Wave. You don't need to store it as .wav; it can extract the samples from .mp3 just fine.
First I need to extract an array of samples from a pointer to untyped data. Currently I only support 32-bit waves. I will add 16- and 8-bit support if I ever need them, but I will not go searching for 16- and 8-bit music just to test it.
I also discard one channel. Let's hope that does not screw me later. I also apply the 'abs' (absolute value) function that is nicely built into Pascal, as I don't really care which way the wave swings, only how much.
Now that I have the samples, I use the following formula to calculate how many samples will fall into one beam:
    MovePerSecond := TARGET_FPS * RoadSpeed;
    SecondsPerGap := BEAMS_GAP / MovePerSecond;
    SamplesPerBeam := trunc(SecondsPerGap * SampleRate);
Then I set the length of the beam sizes array:
setLength(MusicData, length(WaveSamples) div SamplesPerBeam);
I should probably mention my strategy. I have only a small number of beam objects, but I change a beam's value whenever it loops back to the far end of the road. All the values are stored in an array.
At first, I fill the array with the average value of all the samples for said beam. Here I keep note of the lowest and highest average and when all is done, I transform the values to the beam size range.
This way, every song uses the entire spectrum.
I tried a lot of ways to determine the final value, but they all tend to converge to a long line of max values. The average at least varies a bit more, but I still would not call it perfect.
And now to something different. You can shoot now. There is not much to shoot at, but it's interesting for other reasons.
First, your shots are two textures, one horizontal and one vertical, crossed through each other's centers, so that seen from the front they form a plus shape.
This means that I have to finally implement rotation. I was prepared to do some serious math, but then I looked into raymath and found the 'Vector3RotateByAxisAngle' function. So I used that. So what?
One problem is that if you want all the pieces to share the same origin, determining the offset is a bit unintuitive, as the texture is first moved to its place and only then rotated around the origin.
You also have to draw vertical textures twice, once for each side.
This presents yet another problem: drawing order. You see, let me tell you how drawing in 3D works.
When you draw a thing in 3D, your GPU first checks which parts of that thing are behind some other thing, so that it does not draw them over it (the depth test). This makes sense, but it has one tiny flaw: it treats transparent pixels as solid things that can obscure stuff.
This means that if you draw transparent textures, you must draw the ones that are the furthest away from camera first and work your way forward.
This approach works as long as the textures don't cross each other. In other words, I'm kind of in a bad situation.
There is one other way, tho: the alpha discard shader.
This shader tells the GPU to ignore a pixel if its alpha is below a certain threshold. It slows the entire process down a bit, but I'm doing only very basic GPU operations, so this should not be a problem.
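What that shader looks like in practice: a fragment shader following the naming conventions of raylib's default GLSL 330 shader (`fragTexCoord`, `fragColor`, `texture0`, `colDiffuse`, `finalColor`). The 0.5 threshold is an arbitrary choice:

```glsl
#version 330

in vec2 fragTexCoord;
in vec4 fragColor;

uniform sampler2D texture0;
uniform vec4 colDiffuse;

out vec4 finalColor;

void main()
{
    vec4 texelColor = texture(texture0, fragTexCoord);

    // The key line: fully skip nearly-transparent pixels, so they never
    // reach the depth buffer and can't obscure what's behind them.
    if (texelColor.a < 0.5) discard;

    finalColor = texelColor*colDiffuse*fragColor;
}
```

You hand this to raylib with `LoadShader(0, "alpha_discard.fs")` (the file name is mine) and wrap the textured draws in `BeginShaderMode(...)` / `EndShaderMode()`.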
I generated the shader with AI, as I don't want to study graphics programming right now. This is why I use raylib after all.
Yes, I have AI generated code in my project now. Feel free to diss on me all you want.
Turns out you don't compile your shader, but you give it to the GPU to compile at startup. It makes sense, as each GPU uses different machine code.
One more fun thing to mention: when the shader is active, all non-textured shapes turn white, as they pass their color to the shader in a different way. This is not a problem tho, as I only need it to affect textures.
Well, something to shoot at would be fun. Also some sound effects, when I'm already doing the music.