I would like an example of how to draw a sphere, with a texture on it (e.g. checkered), in modern XNA (i.e. MonoGame). Everything should happen programmatically: the model and the texture are generated, not loaded from files. Brief and tractable is a plus.
The end result would look like the attached image, except for the shadow and the fact that I don't care exactly how the texture deforms towards the edges.
Indicating prior research: I've tried SpherePrimitive but could never get a texture onto it. If you can, consider it already present. I have also tried this example from the web; it didn't compile against MonoGame, and when it finally did, the end result did not look like a sphere.
A bit of context: I'm using XNA because it looks like the only cross-platform way to do 3D in C#, but I'm doing a simple visualization, so spheres are all I need so far.
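To make "generated, not loaded" concrete: the checkered texture is the half I can already picture. In XNA/MonoGame, something like this sketch (the helper name is mine) produces one, ready to assign to a BasicEffect with TextureEnabled = true. What I'm missing is a sphere mesh with texture coordinates to put it on.

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    static class CheckerTexture
    {
        // Fill a Color[] and push it into a Texture2D with SetData:
        // no content pipeline involved.
        public static Texture2D Create(GraphicsDevice device, int size = 256, int squares = 8)
        {
            var data = new Color[size * size];
            int cell = size / squares;
            for (int y = 0; y < size; y++)
                for (int x = 0; x < size; x++)
                    data[y * size + x] = ((x / cell + y / cell) % 2 == 0)
                        ? Color.White : Color.Black;

            var texture = new Texture2D(device, size, size);
            texture.SetData(data);
            return texture;
        }
    }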
It turns out that XNA/MonoGame is a particularly poor fit for that task. Contrary to my expectations, it is not a 3D game builder but a 3D game engine builder. The APIs provided are too low-level to tackle the described task in one go.
I have tried Unity 3D, but it makes development Unity-centric, which is a nuisance given that I already have a code base and tooling selected.
I am currently settling for using OpenTK for the job. It is still fairly low-level, but:
It is a no-frills solution: create a window when needed, dispose of it when done. Being a library, it doesn't force a certain mindset, tooling or entry point.
It is basically OpenGL, for which there's a huge amount of code already written and available for review.
It's one small and stable dependency, with fewer layers of indirection.
I have already found a working textured-sphere-drawing example.
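For reference, the gist of the approach, condensed into a sketch of my own (assuming OpenTK 2.x/3.x and the legacy fixed-function GL API; this is not the exact code of the example I found):

    using System;
    using OpenTK;
    using OpenTK.Graphics.OpenGL;

    class SphereWindow : GameWindow
    {
        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            GL.Viewport(0, 0, Width, Height);
            GL.Enable(EnableCap.DepthTest);
            GL.Enable(EnableCap.Texture2D);

            // Build an 8x8 black/white checkerboard entirely in memory.
            const int n = 8;
            var pixels = new byte[n * n * 3];
            for (int y = 0; y < n; y++)
                for (int x = 0; x < n; x++)
                {
                    byte c = (byte)(((x + y) % 2 == 0) ? 255 : 0);
                    int i = (y * n + x) * 3;
                    pixels[i] = pixels[i + 1] = pixels[i + 2] = c;
                }

            GL.BindTexture(TextureTarget.Texture2D, GL.GenTexture());
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);
            GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgb, n, n, 0,
                          PixelFormat.Rgb, PixelType.UnsignedByte, pixels);
        }

        protected override void OnRenderFrame(FrameEventArgs e)
        {
            base.OnRenderFrame(e);
            GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

            Matrix4 proj = Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver4, Width / (float)Height, 0.1f, 100f);
            GL.MatrixMode(MatrixMode.Projection);
            GL.LoadMatrix(ref proj);
            Matrix4 view = Matrix4.LookAt(new Vector3(0, 0, 4), Vector3.Zero, Vector3.UnitY);
            GL.MatrixMode(MatrixMode.Modelview);
            GL.LoadMatrix(ref view);

            // Walk latitude bands; each band between two rings is one triangle strip.
            const int slices = 32, stacks = 16;
            for (int i = 0; i < stacks; i++)
            {
                double lat0 = Math.PI * i / stacks - Math.PI / 2;
                double lat1 = Math.PI * (i + 1) / stacks - Math.PI / 2;
                GL.Begin(PrimitiveType.TriangleStrip);
                for (int j = 0; j <= slices; j++)
                {
                    double lon = 2 * Math.PI * j / slices;
                    foreach (double lat in new[] { lat1, lat0 })
                    {
                        GL.TexCoord2(j / (double)slices, 0.5 - lat / Math.PI);
                        GL.Vertex3(Math.Cos(lat) * Math.Cos(lon), Math.Sin(lat), Math.Cos(lat) * Math.Sin(lon));
                    }
                }
                GL.End();
            }
            SwapBuffers();
        }

        static void Main() { using (var w = new SphereWindow()) w.Run(60.0); }
    }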
Edit: As the comments said, this question is way too broad, so I only hope to get answers that focus on one particular part of it. Any ideas you have are welcome.
I'm working on a project where I'm supposed to analyze GIS (vector) data to extract road features, like axes and borders, from a large-scale city road network, then procedurally generate a 3D road model based on those features. The objective is an editable 3D road-network geometry in Unity in which I can easily adjust the position/width/height... of each road (and, of course, get immediate feedback like what we see in SimCity or Cities: Skylines; in other words, set procedural modeling parameters in Unity and directly see how they influence the resulting geometry).
Since I'm quite new to things like GIS, Unity, and Maya, I'm wondering: what is the correct workflow, and do you have recommended tools for it? Especially, where and in which step should I write the procedural modeling algorithms? It seems that Unity is just a rendering/gaming tool that can't build a complex model itself, so maybe I must build up the geometry before importing it into Unity3D; but if I do so, can I still adjust the model and get immediate feedback inside Unity?
In my assumption, I guess I need to:
use GIS tools like QGIS to export roadmap data in shapefile format, usually points and lines
then find somewhere to transform it into spatial coordinates and do the math to extract road features
do the procedural modeling things to get a "dynamic" model
finally, render it in Unity
If so, which tool/platform is good for each step?
Thanks a lot!
There are Unity extensions available that can generate roads. A couple examples I've looked into:
Easy Roads 3d
can import from OpenStreetMap (OSM)
Assuming you need to use a source other than OSM, you could translate your GIS data to OSM format and import that.
I don't have a lot of experience with this tool myself, but there are some threads on the Unity community forums and the author seems to be very responsive.
Mapbox Unity SDK.
if you are required to use your GIS data you won't be able to use this
open source, so you can always look at the source to see how the transforms are done.
There are many hooks available and several tutorials that demonstrate how to hook into the pipeline and extract features from the incoming data and perform procedural generation of scene geometry (see the tutorials). The team behind it has been very helpful in answering questions.
I've done a bit of work with this tool, and the biggest downside (for me) is that it can only generate a map at runtime, i.e. not in the editor. This is due to licensing restrictions on the data.
Both of the options I mentioned do the transform from GIS coordinates to Unity space inside Unity. Unity runs fully featured C# code and is capable of building complex geometry. That obviously takes computation time, so you need to trade it off against performance; you should only need to build the geometry once, however (or at least once after each edit/modification), not per frame. In my use I've modeled buildings in SketchUp, imported those into Unity, then used the Mapbox utilities to map their locations to the correct spots (elevation and lat/long coordinates). You could do a similar thing with Maya models.
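For a sense of what that transform involves, a small sketch (my own helper, using an equirectangular approximation, which is adequate at city scale; a real project would use a proper projected coordinate system):

    using System;
    using UnityEngine;

    static class GeoToUnity
    {
        const double EarthRadius = 6378137.0; // WGS84 equatorial radius, meters

        // Equirectangular approximation: treat the city as locally flat.
        // originLat/originLon define where Unity's world origin sits.
        public static Vector3 ToScene(double lat, double lon, double originLat, double originLon)
        {
            double x = (lon - originLon) * Math.PI / 180.0 * EarthRadius * Math.Cos(originLat * Math.PI / 180.0);
            double z = (lat - originLat) * Math.PI / 180.0 * EarthRadius;
            return new Vector3((float)x, 0f, (float)z); // Unity: x = east, z = north
        }
    }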
Some of what you need to do depends on which of these describes your situation:
If you need to dynamically generate the roads every time you launch the program, you are better off building the code into Unity to translate from GIS to Unity scene coordinates and dynamically generate road geometry.
If you are doing a one-time import of road networks and then modifying that static network, build a tool to convert your GIS data and generate road geometry as a one-time task. You'll still probably want to use Unity for it, since it includes many utilities to work with vectors and generate geometry; a minimal sketch of that kind of mesh generation follows.
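Here is that sketch: a hypothetical Unity component (all names mine) that extrudes a flat strip of adjustable width along a centerline polyline. It is a minimal illustration of procedural road geometry, not a real road tool; intersections, elevation and smoothing are all omitted.

    using UnityEngine;

    // Attach to a GameObject that has a MeshFilter and a MeshRenderer.
    [RequireComponent(typeof(MeshFilter))]
    public class RoadStrip : MonoBehaviour
    {
        public Vector3[] centerline;   // e.g. points converted from GIS coordinates
        public float width = 6f;

        void Start() { Rebuild(); }

        // Call again whenever width or centerline change, for immediate feedback.
        public void Rebuild()
        {
            if (centerline == null || centerline.Length < 2) return;
            int n = centerline.Length;
            var verts = new Vector3[n * 2];
            var uvs = new Vector2[n * 2];
            var tris = new int[(n - 1) * 6];

            for (int i = 0; i < n; i++)
            {
                // Road direction at this point (forward difference; backward at the end).
                Vector3 dir = (i < n - 1 ? centerline[i + 1] - centerline[i]
                                         : centerline[i] - centerline[i - 1]).normalized;
                Vector3 side = Vector3.Cross(Vector3.up, dir) * (width * 0.5f);
                verts[i * 2]     = centerline[i] - side;   // left edge
                verts[i * 2 + 1] = centerline[i] + side;   // right edge
                uvs[i * 2]       = new Vector2(0f, i);
                uvs[i * 2 + 1]   = new Vector2(1f, i);
            }

            for (int i = 0; i < n - 1; i++)
            {
                int v = i * 2, t = i * 6;
                // Two upward-facing triangles per quad between adjacent cross-sections.
                tris[t]     = v;     tris[t + 1] = v + 2; tris[t + 2] = v + 1;
                tris[t + 3] = v + 1; tris[t + 4] = v + 2; tris[t + 5] = v + 3;
            }

            var mesh = new Mesh { vertices = verts, uv = uvs, triangles = tris };
            mesh.RecalculateNormals();
            GetComponent<MeshFilter>().mesh = mesh;
        }
    }

Calling Rebuild() after changing width or centerline gives exactly the edit-and-see loop the question asks about.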
I'm new to SlimDX and working in a legacy system. There's a Direct3D11 device being used, but I'm trying to use CreateSphere from:
SlimDX.Direct3D9.Mesh.CreateSphere(Direct3D9.Device,...)
Is there any way to use CreateSphere with a Direct3D11 device? Casting to Direct3D9.Device is not a valid cast. I don't understand why a newer API would remove a feature as simple as creating a sphere.
The problem with your question is that you're mixing up two very distinct things.
The DirectX API didn't have a CreateSphere method, ever. It came from the D3DX helper library (SlimDX's Mesh.CreateSphere wraps D3DXCreateSphere), which Microsoft has since discontinued; Direct3D 11 has no built-in equivalent.
Just write your own or use a pre-made mesh. If you don't know how, you were probably relying too much on the helper libraries anyway.
Of course, your level of understanding is betrayed by the feeling that creating a 3D sphere is somehow a simple task. It's not. There are tons of different ways to generate even basic 3D primitives, depending on what you're doing; that's probably one of the reasons the helper library was discontinued in the first place. There are many ways to arrange the vertices of a sphere (UV sphere, icosphere, ...), many ways to index them, and many different vertex formats that need to be filled differently based on what you're doing.
It still made sense to have those helper libraries back when these problems simply weren't there, before the days of the programmable graphics pipeline. The old fixed pipeline was pre-made for relatively simple tasks; today you have much more flexibility, for some cost in having to understand how things are done: HLSL, texture mapping, lighting calculations, and so on.
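That said, "write your own" for a basic UV sphere is roughly a page of code. A sketch that builds plain vertex/index arrays ready to upload into Direct3D 11 vertex/index buffers (the interleaved position+UV layout is my choice; buffer creation and shaders are omitted):

    using System;
    using System.Collections.Generic;

    static class SphereBuilder
    {
        // Produces interleaved floats (x, y, z, u, v) plus a triangle index list.
        public static void Build(int slices, int stacks, float radius,
                                 out float[] vertices, out ushort[] indices)
        {
            var verts = new List<float>();
            for (int i = 0; i <= stacks; i++)
            {
                double phi = Math.PI * i / stacks;            // 0 (north pole) .. PI (south pole)
                for (int j = 0; j <= slices; j++)
                {
                    double theta = 2 * Math.PI * j / slices;
                    verts.Add((float)(radius * Math.Sin(phi) * Math.Cos(theta))); // x
                    verts.Add((float)(radius * Math.Cos(phi)));                   // y
                    verts.Add((float)(radius * Math.Sin(phi) * Math.Sin(theta))); // z
                    verts.Add((float)j / slices);                                 // u
                    verts.Add((float)i / stacks);                                 // v
                }
            }

            var idx = new List<ushort>();
            int ring = slices + 1; // vertices per latitude ring (seam vertex duplicated)
            for (int i = 0; i < stacks; i++)
                for (int j = 0; j < slices; j++)
                {
                    int a = i * ring + j;
                    // Two triangles per quad between adjacent rings.
                    idx.AddRange(new[] { (ushort)a, (ushort)(a + ring), (ushort)(a + 1) });
                    idx.AddRange(new[] { (ushort)(a + 1), (ushort)(a + ring), (ushort)(a + ring + 1) });
                }

            vertices = verts.ToArray();
            indices = idx.ToArray();
        }
    }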
I'm developing in MonoTouch and need to draw a 3D model on-screen, making it possible to rotate around it and zoom in/out.
Usually, developing things for iOS (apart from the usual weird API kinks) is a breeze. I need an image, I load it and display it with a few lines of code. The same goes for audio, touch events, etc. However, when I look at 3D, what I get is OpenGL ES 2.0, which seems unnecessarily low-level and far from plug and play. Weirder still, I couldn't find any simple framework to wrap it. Am I missing something here? I found Unity3D, but that's way more than I need (not to mention the price and, again, the learning curve).
Do I really have to invest time in learning the intricacies of 3D rendering when I just want to display a model? OpenGL ES 1.1 seems a bit simpler but may not have the functionality I need (and again, the lack of "1-2-3 this is how it works" tutorials seems weird to me). Or are my Google skills way poorer than I thought?
Sorry if the question invites a vague answer, but summarized, I guess my question is: "What's the simplest way of displaying/rotating/zooming a 3D model in MonoTouch using OpenGL ES 1.1/2.0 (preferably 2.0, but 1.1 is also OK)?"
I found this great tutorial for OpenGL ES 2.0.
It's not unnecessarily low-level, IMHO. It enables a broad range of uses; e.g., a game and a simple 3D visualization are two different applications, and each may not need everything the other has.
Of course it would be great if it were more like DirectX, but you can create the necessary classes on your own in no time.
If you want a non-low-level API, consider using something like Unity3D.
"1-2-3 this is how it works" - http://nehe.gamedev.net/
for Mono - http://www.mono-project.com/GtkGLAreaSharp:NeHe (I know it's not MonoTouch, but it should help)
OpenGL ES 1.1 should be more than enough.
What kind of effects do you want to add to the models? 1.1 should be fine for texturing and lighting.
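For what it's worth, the rotate/zoom part really is just a few fixed-function matrix calls per frame in ES 1.1. A rough sketch against the OpenTK.Graphics.ES11 bindings MonoTouch ships (class and member names are mine; DrawModel stands in for your own vertex submission):

    using OpenTK.Graphics.ES11;

    class ModelView /* : iPhoneOSGameView in a real app */
    {
        float angleX, angleY;   // accumulated touch-drag deltas, in degrees
        float zoom = 5f;        // camera distance, driven by pinch gestures

        // Called once per frame from the game view's render loop.
        void RenderFrame()
        {
            GL.Clear((uint)(All.ColorBufferBit | All.DepthBufferBit));

            GL.MatrixMode(All.Modelview);
            GL.LoadIdentity();
            GL.Translate(0f, 0f, -zoom);     // pull the camera back: bigger zoom = farther away
            GL.Rotate(angleX, 1f, 0f, 0f);   // vertical drag tilts the model
            GL.Rotate(angleY, 0f, 1f, 0f);   // horizontal drag spins it

            DrawModel();                     // your own GL.VertexPointer/GL.DrawElements code
        }

        void DrawModel() { /* submit the model's vertex arrays here */ }
    }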
This is what I have to do:
To build a CAD-like application that loads a point cloud (i.e. thousands of 3D points representing a 3D object) from a file, allows the users to manipulate the points (i.e. change the shape by moving them), does a lot of calculations on the points (e.g. finding intersection points between lines and surfaces, detecting whether a point is above or under a surface, measuring distances between points or from points to surfaces, etc.), and then saves the modified points to a file.
It also provides basic CAD-like UI features such as zooming in/out, panning the view, rotating the camera, etc.
Speed is the major concern.
Instead of writing my own functions for matrix operation and defining my own point/line/surface classes, I would like to use existing libraries/APIs to do the job.
I know WPF, XNA and SlimDX provide APIs to do 3D geometric calculations, and all of them ultimately call DirectX, but I'm a newbie to all of them. I'm wondering:
Which one (or some other suggestion) would give better performance in terms of speed?
My understanding of DirectX's 3D functions is that they mainly deal with gaming graphics / screen output. Are they also suitable for data-level calculations (i.e. using the 3D functions to manipulate the point data, calculate distances, etc., without outputting anything to the screen)? By suitable, I mean: if I create thousands of DirectX vertices and manipulate them, would it be much slower than using my own data types and structures?
Please correct me if my understanding is wrong.
If I use WPF, do I need to use XNA as well? I'm kind of mixing up these two things.
The application is supposed to run on research-lab PCs which don't have powerful gaming graphics cards, so does that mean XNA is not preferred?
Any suggestions about the technologies that should be used for this application?
Thanks!!
======== Update
To make it clearer: the app will load ~108,000 points in 3D, and every point will form surfaces with adjacent points, so roughly the same number of 3D surfaces is involved (I'm not generating them all at the same time). I will do a lot of 3D geometric and matrix calculations with the points and surfaces, such as intersection, interpolation, transformation, etc., so the speed of the calculations is my major concern. Most of the time I will only draw the final result to the screen, and the drawing is mainly lines and points, so the speed of drawing is not a big concern. In short, it is not really a graphics-intensive app but a geometric-calculation-intensive app.
After reading the answer and comments, I can think of two options:
store & calculate the data with primitive data types, and convert it to the WPF/XNA/SlimDX structures only when drawing on screen, or
use these APIs' data structures for storing, calculating and drawing all those points.
which one is better?
Honestly, if performance is your primary concern, I would go with the API that gets you closest to the hardware. Less obfuscation = more speed. In that case, from the choices you've provided, SlimDX is the best option, followed by XNA, and lastly, WPF.
No, DirectX must use efficient data structures and algorithms. Think about it: would games that utilize DirectX be able to run at a suitable framerate if all DirectX calculations were inherently slow?
No, WPF and XNA are mutually exclusive. WPF is a framework for creating responsive and intuitive user interfaces. XNA, on the other hand, is a framework for creating games.
Not necessarily. What it actually means is that WPF is not preferred, as WPF will offload a lot of work to compatible video cards. If WPF is unable to find a suitable video card, the CPU will take that work instead, resulting in poor performance.
As I said before, for a graphics-intensive application such as the one you have described, the closer you can get to the hardware, the better. Native DirectX or SlimDX are good options.
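To illustrate option 1 from the update: keep the heavy math in plain value types that never touch the GPU, and convert to the rendering API's vertex format only in the drawing pass. A sketch with hand-rolled stand-in types (SlimDX and XNA ship equivalent Vector3 structs you could substitute):

    using System;

    struct Vec3
    {
        public double X, Y, Z;
        public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }

        public static Vec3 operator +(Vec3 a, Vec3 b) { return new Vec3(a.X + b.X, a.Y + b.Y, a.Z + b.Z); }
        public static Vec3 operator -(Vec3 a, Vec3 b) { return new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z); }
        public static Vec3 operator *(Vec3 a, double s) { return new Vec3(a.X * s, a.Y * s, a.Z * s); }
        public static double Dot(Vec3 a, Vec3 b) { return a.X * b.X + a.Y * b.Y + a.Z * b.Z; }
    }

    static class Geometry
    {
        // Line/plane intersection: the line p0 + t*dir against the plane with
        // normal n through planePoint. Returns false when the line is parallel.
        public static bool LinePlane(Vec3 p0, Vec3 dir, Vec3 planePoint, Vec3 n, out Vec3 hit)
        {
            double denom = Vec3.Dot(n, dir);
            hit = new Vec3();
            if (Math.Abs(denom) < 1e-12) return false;
            double t = Vec3.Dot(n, planePoint - p0) / denom;
            hit = p0 + dir * t;
            return true;
        }

        // Signed distance: positive when p lies on the side n points to,
        // negative on the other side (the "above or under a surface" test).
        public static double SignedDistance(Vec3 p, Vec3 planePoint, Vec3 n)
        {
            return Vec3.Dot(n, p - planePoint);
        }
    }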
Have you considered developing your functionality as a plugin for an existing CAD environment?
AutoCAD, for example, has a very powerful C++ SDK (ObjectARX), and it also provides a managed .NET API. You can use C# and WPF to develop your extensions, and it has existing geometry libraries you can reuse.
Certainly AutoCAD has its price, but there are alternatives, for example BricsCAD. I'm not sure if BricsCAD provides a .NET API, though.
Developing an application from scratch would take weeks if not months.
If I were to develop your functionality as an AutoCAD plugin, it would take me a day.
Consider whether you really need to roll your own 'CAD' environment.
A few weeks ago, I checked out the limits of XNA. I wanted to know how many billboards (GPU accelerated) the engine is able to deal with. The result:
Pure XNA: 350k billboards
XNA as rendering context in WPF: 100k billboards
I do not really know why the engine slows down when rendering to a WindowsFormsHost control. Some debugging shows that GraphicsDevice.Present() is where the time goes.
I'd like to work on a game, but for rapid prototyping I'd like to keep it as simple as possible, so I'd do everything in top-down 2D with GDI+ and WinForms (hey, I like them!), so I can concentrate on the logic and architecture of the game itself.
I'm thinking about having the whole game logic (the server) in one assembly, where the WinForms app would be a client to that game; if/when the time is right, I'd write a 3D client.
I am tempted to use XNA, but I haven't really looked into it, so I don't know whether getting up to speed would take too much time. I really don't want to spend much time doing anything other than the game logic, at least while I have the inspiration. On the other hand, with XNA I wouldn't have to abandon everything and move to a new platform when transitioning from 2D to 3D.
Another idea is just to get over it and learn XNA/Unity/SDL/something, at least to the level where I can make the same 2D version I could in GDI+, so I won't have to worry about switching frameworks anymore.
Let's just say that the game is the kind where you watch a dude from behind, run around the game world and interact with objects. So a bird's-eye perspective could be doable for now.
Thanks.
You should really just bite the bullet and take a look at one of the frameworks you mentioned.
SDL is pretty good, but honestly, if you want to just get down to writing your game, XNA is incredible.
If you are already experienced in C#, you could follow the online tutorials, but picking up even a single book on XNA is enough to really get you going.
This is too long for a comment, but... your game's physics world should be pretty much independent of the view you're using to see it. As an example, it's not uncommon for an RTS (say, Warcraft III) to offer both a 3D view and a minimap. If you think about it, Warcraft I, which was 2D, isn't that different from Warcraft III (whose gameplay is essentially 2D, merely presented using real 3D).
Another example: you're talking about watching a character walk. It's not unlike Counter-Strike (well, in CS you are the dude, but anyway), where you have both your 3D view and a minimap. And gameplay aside, I can certainly walk around "Dust" (one of the most famous CS maps) using only my minimap: I don't need the 3D view to walk around (though of course I can't aim using the minimap).
In a lot of games the "physics world" is not the same as the "3D world"; otherwise, people with different configurations wouldn't be able to play in the same networked game.
Another Counter-Strike example: I had a really old, crappy Celeron with a crappy graphics card that was barely enough to run the game, so I modded the game to use low-polygon models for the characters (this greatly improved the rendering speed and hence made the game very playable on my crappy config). And I could still play networked games. Why? Because changing the view world doesn't change the physics world.
So the "view" really shouldn't be influencing too much your model because the view is a detail. Now of course you have to somehow decide on what you want: but if the "dude" you mentioned could be followed using a 2D top-down view as well as an isometric view as well as an "FPS-like" 3D view, then by all mean model your "physics" in a way that is completely unrelated to the view. That way you'll be able to start with something simple: 2D view, using pixels (like a CounterStrike or a Warcraft 3 minimap). And later on you can start adding a 3D view.
Now the kind of world you need to use depends on what you want: heck, there are both "2D physics / 3D view games", "3D physics / 2D view games", "2D physics / 2.5D view games" (GIYF if you don't know about the '2.5D' term in videogame development), etc.
My point is: the view is unrelated to the model/physics (once again, otherwise people couldn't be playing networked game of CounterStrike or Warcraft).
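Here is that sketch (all names are mine): the World knows nothing about rendering, and the GDI+ view today, or a 3D view later, only reads it through one interface.

    using System.Collections.Generic;

    class Dude
    {
        public float X, Y;      // position in world units, not pixels
        public float Heading;   // radians
    }

    class World
    {
        public List<Dude> Dudes = new List<Dude>();

        public void Update(float dt)
        {
            // Physics/logic only: move dudes, resolve interactions, etc.
            // No drawing code in here, ever.
        }
    }

    interface IWorldView
    {
        void Render(World world);
    }

    // Today: a top-down WinForms/GDI+ view. Later: an XNA/Unity 3D view
    // implementing the same interface, with World untouched.
    class TopDownGdiView : IWorldView
    {
        readonly System.Drawing.Graphics g;
        public TopDownGdiView(System.Drawing.Graphics graphics) { g = graphics; }

        public void Render(World world)
        {
            foreach (var d in world.Dudes)
                g.FillEllipse(System.Drawing.Brushes.Red,
                              d.X * 10 - 4, d.Y * 10 - 4, 8, 8); // world -> pixel scale
        }
    }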
I'm not a game programmer, but I know that the difference between modeling physics problems in 2D and 3D is huge.
I agree that it's a good idea to start with 2D, but don't expect to be able to reuse much of that code in the 3D version. 3D is a different animal.