benArcen

Unity Questions Thread

I get a noticeable framerate drop every time I press or release a key in my game. I'm using GetAxisRaw to handle keyboard movement. (GetAxis had a huge delay in registering release, I think because it interpolates from -1 or 1 to 0 over the course of several frames when using a keyboard.)

 

Anyway, I get a near constant 60fps, as expected - the game is very simple 2D graphics with not much going on, yet. But on either pressing or releasing the key, it'll drop to either 58 or 59 and then immediately go back up. I'm using Fraps' FPS counter for this. It probably wouldn't be as big of an issue if the camera didn't follow the player around, but even if it didn't, once I get lots of moving objects on the screen, it becomes more problematic.

 

So, any ideas?!

 

I'm about to make an empty project with a single script that reads input and see if it has similar framerate drops. EDIT: Good, confirmed it's my code. Means it's fixable. It could also just as easily be linked to what happens when you do some inputtin', since that results in player movement. But it's weird that it'd happen both on pressing and releasing, since those are two separate code paths. Hmm.

 

EDIT EDIT: Um, I made a brand new scene in this project and put nothing in it at all, not even a camera. Input still makes the framerate drop. If I repeatedly mash keys I can get it to drop as low as 30fps...


So... turns out, it wasn't a real problem. I built a standalone version and ran it outside Unity: no framerate drop.

 

The problem, for some reason, is that the Maximize on Play option creates a situation where literally any key press or release on a keyboard causes a sudden framerate drop. If I turn that option off, the issue goes away. I wouldn't even have that option on if I weren't using my laptop right now, which has a much smaller screen.

 

The More You Know


Why are you using GetAxis for keyboard input? You'll get better results by using GetKey, or by setting up buttons in the input manager for it.

Edit: If you have the keys bound to buttons in the input manager, you can edit the responsiveness on releasing the keys by changing the "Gravity" setting.


I didn't write this particular code, my friend did. But GetAxisRaw serves our purpose perfectly well. There's no reason to use GetKey over it.

 

GetAxis DOES go through the input manager. And we want to support controllers as well.
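For reference, the keyboard/controller overlap is exactly why axes are handy here. A minimal sketch of the kind of movement script being discussed (my own illustration, not the poster's actual code; "Horizontal" and "Vertical" are the Input Manager's default axis names, mapped to both WASD/arrows and gamepad sticks):

```csharp
using UnityEngine;

public class RawMovement : MonoBehaviour
{
    public float speed = 5f;

    void Update()
    {
        // GetAxisRaw returns -1, 0, or 1 for keys immediately, with no
        // smoothing applied, but still reads analog values from a stick.
        float h = Input.GetAxisRaw("Horizontal");
        float v = Input.GetAxisRaw("Vertical");
        transform.Translate(new Vector2(h, v).normalized * speed * Time.deltaTime);
    }
}
```

The same script works unchanged for keyboard and controller, which GetKey can't give you without extra branching.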


Why are you using GetAxis for keyboard input? You'll get better results by using GetKey, or by setting up buttons in the input manager for it.

Edit: If you have the keys bound to buttons in the input manager, you can edit the responsiveness on releasing the keys by changing the "Gravity" setting.

 

A good use of "GetAxis" is for any type of key press that you need variation on.  For instance all the character controllers use it for WASD controls.  I have used it to great effect for controlling throttle in vehicles.


Anyway, I get a near constant 60fps, as expected - the game is very simple 2D graphics with not much going on, yet. But on either pressing or releasing the key, it'll drop to either 58 or 59 and then immediately go back up. I'm using Fraps' FPS counter for this. It probably wouldn't be as big of an issue if the camera didn't follow the player around, but even if it didn't, once I get lots of moving objects on the screen, it becomes more problematic.

 

So, any ideas?!

 

Are you perhaps doing a print statement when a button is pressed, or are you seeing errors in the console? Any time you write to the console you can get a framerate drop; even once or twice a frame can be noticeable. Also, a 1 or 2 frame drop really isn't cause for concern if that's all you're seeing. Finally, if you're interested in keybinds that can be rebound at runtime, you might want to check out cInput. It uses a similar set of methods to Unity's Input class.


1) I already solved the problem by discovering it wasn't a problem - it was only a problem with the Unity editor Maximize-on-Play option, as said in the follow-up post.

2) No logs.

3) It was very noticeable, as said in that post you quoted. It was a very obvious jump in the camera movement. It was more than just a 1 or 2 frame drop, as clarified in the edit in that post, as mashing the keyboard in an empty project with literally nothing in the scene, not even a camera, will result in a massive framerate decrease down to as low as 30fps (could probably get lower if I could physically strike the keys any faster). It might be a problem exclusive to my laptop, or at least my level of hardware, though. I'm going to be home soon and can test on my infinitely more powerful desktop.


NEXT QUESTION

 

I have two EdgeCollider2Ds. They do not collide with each other. Why? They are nothing but edge colliders with arbitrary vertices and a sprite so I know where they are. One has no rigidbody, so it is a static collider. The other has a rigidbody and just falls right through it.

 

Neither has a layer assigned, so it's not anything to do with that.


Have you tried putting a rigidbody on both of them? The one that doesn't have one can still be kinematic if you don't want it to move.

I'm just guessing, I don't really know.


NEXT QUESTION

 

I have two EdgeCollider2Ds. They do not collide with each other. Why? They are nothing but edge colliders with arbitrary vertices and a sprite so I know where they are. One has no rigidbody, so it is a static collider. The other has a rigidbody and just falls right through it.

 

Neither has a layer assigned, so it's not anything to do with that.

 

I haven't used the 2D physics as much, but here are some suggestions for when I've seen issues like this with the 3D system.

 

1.) All objects are on a layer whether or not they are placed there manually, so you might want to check that those two layers are set to collide in the layer collision matrix. Also double-check that the colliders are enabled.

 

2.) Are the objects triggers or standard colliders? For any collision to take place, at least one object needs to have a rigidbody component attached. Also, moving objects with trigger colliders must have a rigidbody attached for events to fire properly.

 

3.) Are these objects children of something? This can sometimes affect collision event notification, as trigger and collision events are sent to the parent if the child object doesn't have a rigidbody.

 

4.) Are the two objects at different Z depths? I'm not sure if this matters for 2D collision, but it may.

 

5.) How many points/vertices do the two edge colliders have? In my experience, line/edge colliders can sometimes tunnel through one another, completely missing the event. You tend to see the same problem in 3D with very small colliders.

 

As a test, the first thing you should do is make some test scripts that assign to the 2D collision enter/stay/exit events just to see if they are firing or not.  If this isn't happening, try running some raycasts to see if the objects are lining up properly.
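A minimal diagnostic script along the lines of that first test (my own sketch, attach to either object) would just log every 2D event that fires:

```csharp
using UnityEngine;

public class Collision2DLogger : MonoBehaviour
{
    // Fires for standard (non-trigger) collider contacts.
    void OnCollisionEnter2D(Collision2D col)
    {
        Debug.Log(name + " collided with " + col.gameObject.name);
    }

    // Fires when this or the other collider is marked Is Trigger.
    void OnTriggerEnter2D(Collider2D other)
    {
        Debug.Log(name + " trigger entered by " + other.name);
    }
}
```

If neither message ever appears, the physics system isn't even registering the contact, which narrows the problem down considerably.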


1) They're all on the default layer and all layers are configured to interact with all layers.

2) Standard colliders. One of them is not static and has a Rigidbody2D attached.

3) Not children. Object A has a transform, a sprite renderer, and an edge collider. Object B has those three and a rigidbody2D.

4) They're at the same Z depth, but no, it doesn't actually matter: Z depth only affects rendering, not 2D physics.

5) One has three vertices, the other has about six, at the moment anyway. I've tested with various configurations of vertices. I first noticed it with more complex objects (two enemies made of edge colliders walking right through each other when they should collide and push against each other (well, bounce off, according to the script I wrote, but either way SOMETHING should happen)). Then I moved down to a much simpler test case.

 

And yeah that's the first thing I did. ): The edge colliders do collide with other objects. Just not each other.

 

I suspect edge colliders are not meant to collide with each other. Maybe they were originally built with the intent of never being anything but static. It would make some sense. But I don't know. I hope it's a bug somewhere.


Are you using continuous collision?  I feel like edge colliders could have trouble with predictive collision systems.

 

Maybe try turning your fixed update rate up, and your time scale down to see if that affects their behaviour.  
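That experiment can be done in a couple of lines (my own sketch; the specific values are arbitrary, the point is more solver steps per unit of travel):

```csharp
using UnityEngine;

public class PhysicsStepTweak : MonoBehaviour
{
    void Awake()
    {
        // Raise the physics tick rate; Unity's default is 0.02 (50 Hz).
        Time.fixedDeltaTime = 1f / 120f;

        // Slow time down so fast movers cover less distance per step.
        Time.timeScale = 0.5f;
    }
}
```

If the edge colliders start registering hits under these settings, tunneling is the culprit; if not, something else is going on.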


I think I did try that. I'll try again RIGHT NOW.

 

EDIT: Yeah they are continuous. No good.


I'm aware of polygon colliders. They don't work.

 

I've got a tile-based level setup. Boxes get caught and stuck on corners of tiles, even on a flat surface. Polygons get caught and stuck on corners, even on a flat surface. Both of those are sort of expected. But then CIRCLES have their own problem. They appear to work at first, but then you notice that every so often they "hop" a little bit when they hit the corners. At least they don't get stuck, but... it's not good enough.

 

And then for some reason, edge colliders work perfectly.


I'm trying to figure out how something like this is optimized.

 

Warning: it's not optimized, so it might freeze your shit after a while.

https://dl.dropboxusercontent.com/u/92741283/painted%20coil/8-13-2014/8-13-2014.html

 

Because I'm just forever instantiating more GameObjects, the draw calls, tris, and verts constantly increase while the framerate goes further and further down. I was thinking there might be a way to consolidate draw calls or something on the older cubes (I don't want to destroy them), since once the objects are instantiated they don't do anything but sit there in 3D space. Is there a simple way to do this?


I'm trying to figure out how something like this is optimized.

 

Warning, it's not optimized so it might freeze your shit after a while.

https://dl.dropboxusercontent.com/u/92741283/painted%20coil/8-13-2014/8-13-2014.html

 

Because I'm just forever instantiating more GameObjects, the draw calls, tris, and verts constantly increase while the framerate goes further and further down. I was thinking there might be a way to consolidate draw calls or something on the older cubes (I don't want to destroy them), since once the objects are instantiated they don't do anything but sit there in 3D space. Is there a simple way to do this?

 

Can you post the code you are using to create the objects?

 

From what I can tell, it seems like you are generating an object every frame, or at least every FixedUpdate. Part of the fix could be checking whether the last object instantiated is a significant distance away before creating the next one, just to reduce the amount of creation. If you do need that many objects, you could try generating them all in the first frame and then enabling them as necessary, but you'll still run into an upper limit. And if you really need to create that many objects, you might want to look into particle systems instead, provided you don't need any collision on them.
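The distance check could look something like this (my own sketch; "cube" and "minSpacing" are illustrative names, not from the original project):

```csharp
using UnityEngine;

public class SpacedSpawner : MonoBehaviour
{
    public GameObject cube;
    public float minSpacing = 0.5f; // minimum travel between spawns

    Vector3 lastSpawn;

    void Update()
    {
        // Only spawn once the emitter has moved far enough from the
        // previous spawn point, instead of every frame regardless.
        if (Vector3.Distance(transform.position, lastSpawn) >= minSpacing)
        {
            Instantiate(cube, transform.position, Quaternion.identity);
            lastSpawn = transform.position;
        }
    }
}
```

With this, a stationary emitter spawns nothing at all, and a fast-moving one still leaves an evenly spaced trail.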

using UnityEngine;
using System.Collections;

public class InstantiateCubes : MonoBehaviour {

	public GameObject cube;
	Transform ourTransform;

	// timer
	float clock;
	public float cycle;

	void Start ()
	{
		// cache the transform once instead of looking it up every frame
		ourTransform = transform;
	}

	// Update is called once per frame
	void Update ()
	{
		clock += Time.deltaTime;
		if (clock > cycle)
		{
			clock = 0f;
			MakeCube();
		}
	}

	void MakeCube()
	{
		Instantiate(cube, ourTransform.position, Quaternion.identity);
	}
}

I get what you are saying about lowering the quantity of objects I instantiate. 

 

I know nothing about particle-systems. How are they fundamentally different?

 

I think I just don't have a foundational understanding of why some things increase draw calls. I would think that because I'm using the same material for all the cubes, it would only require one draw call for all of them. I'm saying this so that you understand how ignorant I am on the issue.


The particle system suggestion was based on my assumption that you're creating some kind of visual effect, which can be done with a particle system and a bit of math. Using the same material can help reduce draw calls, but Unity's batching can only do so much, especially with large numbers of objects. If you're running into draw call issues in an empty scene, then it's likely the implementation won't hold up in a level with a bunch of other stuff going on (image effects, UI, background, etc.).

 

Essentially, particle systems are just a cheap way of creating visual effects. Instead of generating a large animation or 3D model, you arrange textures in such a way as to create a similar-looking effect. A particle system is just something that emits particles (a texture/sprite that always faces the camera), then applies some movement, rotation, etc. logic to them. They're great for creating visual effects, and can sometimes be useful in other scenarios.

 

Out of curiosity, what is this for?


Out of curiosity, what is this for?

 

Just the thing I posted. I wanted to make something quick and easy, so I did. But once I started to see its limitations, I was wondering how they are typically overcome. 

 

I was joking with my wife that I could put a little narrative in it where the player is told they've been hired for various jobs, and then they go into the exact same screen I posted above. So, for instance, the narrator says "You've been tasked with decorating a space-cake with colored icing!" and then after you complete it the narrator says "Looks like one of the sky-writers is sick, so the job is up to you!"


I'm trying to figure out how something like this is optimized.

 

Warning, it's not optimized so it might freeze your shit after a while.

https://dl.dropboxusercontent.com/u/92741283/painted%20coil/8-13-2014/8-13-2014.html

 

Because I'm just forever instantiating more GameObjects, the draw calls, tris, and verts constantly increase while the framerate goes further and further down. I was thinking there might be a way to consolidate draw calls or something on the older cubes (I don't want to destroy them), since once the objects are instantiated they don't do anything but sit there in 3D space. Is there a simple way to do this?

 

So GameObjects in Unity can be expensive (as your demo shows), because GameObjects are really meant for complicated things that will be scripted somehow, interacted with by the player, or interact with the environment. For your demo here, you should build a particle emitter that spits out meshes. Particle systems are mostly used with sprites, but there's nothing stopping you from spitting out your cubes too.

http://docs.unity3d.com/Manual/ParticleSystems.html
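The mesh-spitting idea can be sketched in a few lines (my own illustration; it assumes a ParticleSystem component already sits on the same GameObject, and uses Unity's built-in cube mesh):

```csharp
using UnityEngine;

[RequireComponent(typeof(ParticleSystem))]
public class CubeParticles : MonoBehaviour
{
    void Start()
    {
        // Switch the renderer from camera-facing quads to real meshes,
        // so each particle is drawn as a cube.
        var psRenderer = GetComponent<ParticleSystemRenderer>();
        psRenderer.renderMode = ParticleSystemRenderMode.Mesh;
        psRenderer.mesh = Resources.GetBuiltinResource<Mesh>("Cube.fbx");
    }
}
```

The particle system's emission, lifetime, and velocity modules then handle spawning and movement without any per-cube GameObject overhead.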

 

You could also combine the meshes of all your cubes together into a single game object as in the following question on the Unity site:

http://answers.unity3d.com/questions/10165/combine-objects-instantiated-at-runtime.html

 

Basically, to optimize you need to find a way to cut down on the footprint of each individual cube being constructed. GameObjects are handy, but there's a lot of overhead, because a lot goes on behind the scenes to make them so handy. This is also why there's a practical limit to how many you can have in a scene. Right now there are a lot of cool possibilities because all those cubes are individual GameObjects, but your demo isn't really using any of those features. So optimization is about deciding what you want those things to do, and making things more efficient by doing away with the possibilities you don't need.
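The mesh-combining approach from that linked answer boils down to something like this (a rough sketch of my own; it assumes all the cubes share one material, and the names are illustrative):

```csharp
using UnityEngine;

public static class CubeMerger
{
    // Bake a batch of cube MeshFilters into a single static mesh,
    // so the whole batch renders in one draw call.
    public static GameObject Merge(MeshFilter[] filters, Material material)
    {
        var combine = new CombineInstance[filters.Length];
        for (int i = 0; i < filters.Length; i++)
        {
            combine[i].mesh = filters[i].sharedMesh;
            // Bake each cube's world position/rotation into the vertices.
            combine[i].transform = filters[i].transform.localToWorldMatrix;
        }

        var merged = new GameObject("MergedCubes");
        var mesh = new Mesh();
        mesh.CombineMeshes(combine); // merges everything into one submesh
        merged.AddComponent<MeshFilter>().mesh = mesh;
        merged.AddComponent<MeshRenderer>().material = material;
        return merged;
    }
}
```

One caveat: a Unity mesh defaults to a 65,535-vertex limit, so with endless spawning you'd merge in batches (and destroy or disable the originals after each merge, or you'll render everything twice).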


Thanks y'all, this will be some good stuff for me to chew on for a while. It's so nice to have this here so I can ask for direction and get it. 


I'm working on a laser beam that uses a trigger collider to determine if it is hitting something or not. The collider is scaled to the length and size of the beam like so:

beamCollider.size = new Vector3(size / scale.x, size / scale.y, Vector3.Distance(aimPoint, transform.position) / scale.z);
This works perfectly in most cases, but the scaling becomes inaccurate relative to the size of the beam when the parent transform of the beam is scaled non-uniformly (e.g. has a scale of x = 2, y = 1, z = 3). I'm sure this comes down to some 3D math concept I don't understand, so my questions are: what's going on here, and can I account for it in my scaling algorithm?
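One guess at a fix, as an untested sketch: `BoxCollider.size` is expressed in the collider's local space, so every ancestor's scale multiplies into each axis. Dividing by `Transform.lossyScale` (the accumulated world scale) instead of a single local scale should cancel that out. Only `beamCollider` and `size` come from the snippet above; the rest is my own framing:

```csharp
using UnityEngine;

public class BeamColliderSizer : MonoBehaviour
{
    public BoxCollider beamCollider;
    public float size = 0.25f; // beam thickness in world units

    public void Resize(Vector3 aimPoint)
    {
        // lossyScale is the combined scale of this transform and all
        // its parents, so dividing by it converts world-space sizes
        // into the collider's local space.
        Vector3 world = transform.lossyScale;
        float length = Vector3.Distance(aimPoint, transform.position);
        beamCollider.size = new Vector3(size / world.x,
                                        size / world.y,
                                        length / world.z);
    }
}
```

Worth noting: when non-uniform scale is combined with rotation in the hierarchy, `lossyScale` is only an approximation, which may be exactly the 3D math wrinkle being hit here.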

And before you ask: No, I can't use a spherecast to determine collisions because for some unknown reason Unity's SphereCast is unable to detect compound colliders, making it useless for anything involving hitboxes.

