I am trying to write a software 3D renderer for MS-DOS (no memory extender), and I'm looking for the algorithm that is both fastest and uses the least memory. I know the most common approach is a z-buffer, and there are other well-known techniques, but they all demand either CPU power or a lot of memory. I have come up with a method that might use less memory, but I need help making it faster. If you know of any very fast, very low-memory algorithms, please tell me about them.

Now the question is:

The camera is at the origin, looking forward along the z-axis. There are two non-intersecting triangles ahead of it. Assume no clipping is required. The triangles are filled with solid colors, say one red and one blue. When projected onto the screen, the two triangles overlap in 2D, so a decision has to be made about which one is in front of the other. I need the fastest algorithm that decides which triangle is in front.

So far I can do it like this: pick a screen pixel that lies inside the overlap area, shoot a ray through it, and find the depths (zp) at which the ray intersects the two triangles in 3D. The smaller zp is in front.
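For reference, that per-pixel test amounts to one ray/plane intersection per triangle. Here is a minimal C sketch under my own assumptions (pinhole camera at the origin with focal length f, screen point given in camera-space units; all names are mine, not from the question):

```c
#include <assert.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Camera-space z at which the ray through screen point (sx, sy)
   hits the plane of triangle (a, b, c). */
static float plane_depth(Vec3 a, Vec3 b, Vec3 c, float sx, float sy, float f)
{
    Vec3 n   = cross(sub(b, a), sub(c, a)); /* plane normal          */
    Vec3 dir = { sx, sy, f };               /* ray through the pixel */
    return f * dot(n, a) / dot(n, dir);     /* z = t * dir.z         */
}

/* Returns 0 if triangle (a0,b0,c0) is nearer at that pixel, 1 otherwise. */
int nearer_at_pixel(Vec3 a0, Vec3 b0, Vec3 c0,
                    Vec3 a1, Vec3 b1, Vec3 c1,
                    float sx, float sy, float f)
{
    return plane_depth(a0, b0, c0, sx, sy, f)
         <= plane_depth(a1, b1, c1, sx, sy, f) ? 0 : 1;
}
```

Counting operations, each `plane_depth` is one cross product, two dot products, and a multiply/divide, so this is in the same cost range you describe rather than an improvement; it is only meant to pin down the baseline being discussed.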

But maybe there is a faster way? My algorithm takes roughly 90 to 160 simple arithmetic operations (+ - * /) to find the result, which is too costly compared to what I was aiming for.

Note: the example scene above is given specifically to target that one decision; it is just a way of asking the question. The final renderer will handle many different scenes. Also, my aim is a renderer for dynamic scenes, so ideas like BSP trees or sphere/cube occupancy structures won't work.

You could sort the triangles into depth order and then paint them back to front, so nearer triangles overwrite farther ones (or paint just the front one, if that is all you need). This is faster than finding the nearest triangle at each pixel, because the depth calculation happens once per triangle when sorting rather than once per pixel. If you know the triangles do not interleave in depth (i.e., a triangle with one vertex behind another triangle has all of its vertices behind all of the other's), then the depth key is just the z coordinate of one vertex of each triangle. For more complex cases, you may need to check all the vertices, and possibly split intersecting triangles so they no longer intersect.
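A minimal sketch of that sort in C, assuming view-space vertex depths are already available (the struct layout and the farthest-vertex key are my choices, not the answer's):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct {
    float z[3];   /* view-space z of each vertex */
    int   color;
} Tri;

/* Sort key: the farthest vertex. When triangles don't interleave in
   depth, any single vertex would do equally well. */
static float depth_key(const Tri *t)
{
    float z = t->z[0];
    if (t->z[1] > z) z = t->z[1];
    if (t->z[2] > z) z = t->z[2];
    return z;
}

static int by_depth_desc(const void *pa, const void *pb)
{
    float za = depth_key((const Tri *)pa);
    float zb = depth_key((const Tri *)pb);
    return (za < zb) - (za > zb);   /* farthest first */
}

/* After this call, drawing the array in order paints back to front. */
void sort_back_to_front(Tri *tris, size_t n)
{
    qsort(tris, n, sizeof *tris, by_depth_desc);
}
```

For two triangles, as in the question, the "sort" collapses to a single key comparison, which is far cheaper than the per-pixel ray test.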

Edit: I just noticed you mention BSP trees. These can still be useful with dynamic scenes. We used them in some software that modelled airplanes flying around, and it was worth rebuilding the tree every frame, though that may have been because we had to render multiple viewpoints at the same time. (This was decades ago and I haven't done any 3D work since, so I don't remember many details, but don't rule it out yet.)

Algorithms that use little memory generally pay for it with more computation. Ray tracing, for instance, can render mathematical surfaces and solid geometry directly, without reducing everything to meshes, and only needs to work on one pixel at a time, but it is very slow. Conversely, methods for speeding up even fast algorithms generally spend more memory. In this particular case, the common approach is a z-buffer, which adds an entire image channel just to store depth. Other methods precompute information or arrange the data into elaborate data structures.
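To make the memory trade-off concrete: a full-screen 16-bit z-buffer at 320x200 costs 128,000 bytes, which is why it hurts on real-mode DOS. One common compromise, my suggestion rather than anything in this thread, is to rasterize one scanline at a time and keep only a single row of depths:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define WIDTH 320

/* One scanline of depths: 640 bytes, versus 128,000 bytes for a
   full-screen 320x200 16-bit z-buffer. */
static uint16_t zline[WIDTH];
static uint8_t  cline[WIDTH];  /* color for the same scanline */

/* Reset the row before rasterizing each scanline. */
void clear_scanline(void)
{
    memset(zline, 0xFF, sizeof zline);  /* 0xFFFF = infinitely far */
}

/* Plot a pixel only if it is nearer than what is already there. */
void plot(int x, uint16_t depth, uint8_t color)
{
    if (x >= 0 && x < WIDTH && depth < zline[x]) {
        zline[x] = depth;
        cline[x] = color;
    }
}
```

The catch is that every triangle crossing a scanline must be visited while that scanline is active, so the triangles need to be bucketed by y first; that bookkeeping is the price of the smaller buffer.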

3D graphics is processing- and memory-intensive; there is a reason graphics cards pair big chunks of high-speed RAM with arrays of processors. On a machine that is severely limited in both CPU and memory, there are simplifications you can make. For example, you can make the world "2.5D", with vertical walls and horizontal ceilings and floors, which simplifies much of the math. Constructing the world as cells connected by portals is also a simple way of getting the drawing order right and doing basic occlusion testing. You can also use sprites instead of full 3D models. In other words, use the approaches of the games that actually had to run on such hardware.
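A minimal sketch of the cell-and-portal idea, reduced to one screen axis for brevity; the data layout, names, and the recording stub are all hypothetical:

```c
#include <assert.h>
#include <stddef.h>

/* Each cell lists its portals; each portal names the neighbouring cell
   and the screen columns it covers. */
typedef struct Portal { int neighbor; int x0, x1; } Portal;
typedef struct Cell   { const Portal *portals; int nportals; } Cell;

/* Tiny example world: cell 0 sees cell 1 through columns 100..200. */
static const Portal portals0[] = { { 1, 100, 200 } };
static const Cell   cells[]    = { { portals0, 1 }, { NULL, 0 } };

/* Stand-in for the real wall renderer: record which cells get drawn. */
static int visited[8];
static int nvisited;
static void draw_cell_walls(int cell, int x0, int x1)
{
    (void)x0; (void)x1;
    visited[nvisited++] = cell;
}

/* Draw the camera's cell, then recurse through each portal, narrowing
   the screen window so only potentially visible cells are visited.
   (A real renderer must also mark cells to guard against cycles.) */
void render_cell(int cell, int x0, int x1)
{
    int i;
    draw_cell_walls(cell, x0, x1);
    for (i = 0; i < cells[cell].nportals; i++) {
        const Portal *p = &cells[cell].portals[i];
        int nx0 = p->x0 > x0 ? p->x0 : x0;  /* clip window to portal */
        int nx1 = p->x1 < x1 ? p->x1 : x1;
        if (nx0 < nx1)
            render_cell(p->neighbor, nx0, nx1);
    }
}
```

Because recursion stops as soon as the window closes, occlusion falls out of the traversal for free, which is exactly why this approach suited memory-starved machines.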