Is there any way or chance that DX12 can be implemented in the game engine?
Its benefits are sorely needed in STO. With the amount of spam on screen reducing my top-of-the-line PC to less-than-smooth gameplay, particularly in combat against the new Herald NPCs, it seems obvious the devs have already exceeded what the engine is capable of doing.
More draw calls, less CPU usage, better multi core CPU support.
It'd be a win-win for everyone.
Would Cryptic even bother? Is it even possible?
Is the game engine so old and custom that it can't actually get DX12 support?
Is it going to be stuck in the past for its entire life cycle and crumble under its own weight as the devs heap in more visual effects than the engine can efficiently handle?
While I agree and disagree, this is not a bug but a request; you might post it in the General Discussion forum instead to get the intended player feedback.
DX12 would be the worst possible option, imho.
They should adopt the Vulkan API instead and finally go with native OS X and Linux clients. Going the DX12 route would be a bad choice, as the result would be walling players into a Windows ecosystem with ever-increasing hardware demands.
Plus, not everyone is going to have a free Win10 copy, as only Win7 and Win8 users get it for free; too many people still play STO on WinXP, and even worse, on 32-bit CPUs.
Vulkan would not care if you run it on 32-bit WinXP, Vista, Win7, Win8, Win10, Linux, OS X, BSD, Android, you name it, as long as the client is compiled for the platform.
And yes, Vulkan, and for that matter DX12, would this time be a real godsend, not like the promised DX11 eyewash, because it handles the hardware completely differently.
So basically all the people with high-core-count CPUs who now have bad performance (AMD, anyone?) would make a huge leap in performance, even outclassing a lot of Intel CPUs at five times the cost.
If that's not enough reason to go that route and draw in a greater player base with fewer issues, not forcing them to upgrade hardware but letting them spend money on the game instead, I don't know how bad a business PWE is, as you don't leave possible revenue untapped.
As far as I'm aware, STO hasn't even got a native OpenGL pipeline for Mac? It just runs in a VM or a Linux/Wine-type compatibility layer. While I agree Vulkan would be better portability-wise, I'd suggest it's perhaps the most unlikely to happen, as it'd require a completely new render pipeline (but then DX12 might too, not sure).
I see your point about Win10 lock-in for DX12. Good point! I guess I can rebut that by saying DirectX 11.2 (I think that's the version number) will be basically the same, DX12-wise, just excluding the new feature sets. That is, it'll have all the draw-call and CPU-cycle-saving features but be the most backwards compatible, which is probably what we should actually be hoping for as a minimum to achieve the goal we're thinking of here.
It's probably easiest to get DX11.2 support, given it's "just" a revision of the DX11 code base (not that that means it'd be a minor task to achieve).
Yeah, wasn't sure where to post this; I kinda just wanted STO engineers to see it and respond. Not sure they'd see it no matter where I posted; only a few devs seem to be around here.
You're right that Vulkan would need mostly a total rewrite, but so would DX12 anyway.
On the DX11.2 part: that's already out anyway, but it doesn't change the underlying issue of CPU overhead; you're still stuck in the downward spiral of being shackled to a single-core experience no matter how many cores your CPU has to spare.
Yes, the Mac client uses Wine, and Linux people are encouraged by Cryptic to use the same as well (Wine, not the Mac client, as far as I know).
So naturally it would be Vulkan to go for: no license fee, no walled garden, and a free choice of platforms to port to.
And on the plus side, Vulkan incorporates knowledge and code from Mantle, which already had a successful deployment; it's just that AMD decided to drop it in favor of supporting DX12 and Vulkan at the same time.
It would be very interesting and advantageous for Cryptic to use Vulkan for STO and NW. Not only would they become one of the first, if not the very first, to adopt it, but it would also give them [1] better access to the countless Mac and GNU/Linux gamers who refuse to use WinDoze; [2] major free marketing -- the news would no doubt make a wave in the MMOG and gaming industry.
As a Linux gamer, I prefer to play in a native Linux client. Better security than Microsoft's operating system.
I was actually thinking of DX11.3, but yeah, just checked, and it has no CPU-lowering benefits. Oh well.
Moving to DX12 would have the path of least resistance, I figure, though; DX12 will be at least slightly more similar to the DX11 path than starting from scratch on a Vulkan implementation...
Although starting from scratch might actually be a good thing too. And come on, who doesn't want a Vulkan-powered render engine for STO code-named Spock or something!
Unfortunately, the graphics issues are not completely tied into the API - and moving to DX12 or Vulkan, or Mantle wouldn't fix anything without more significant back end improvements.
For instance, the primary reason larger STFs lag (take as an example particularly spammy CCAs) has nothing to do with CPU or GPU usage - if you watch your system during these times, your GPU and CPU usage will go down to almost nothing - your system is waiting on STO's netcode to update what is supposed to be happening on the screen - a more efficient render engine would do nothing to combat this.
The same is true for most of the laggy GUI screens. STO, like many other MMOs, locks the framerate in some circumstances based on the rate at which updates can be received from and transmitted to the server. Hence on a screen like your Doff or reputation windows, where both your client and the server need to update in real time, the framerate - or more precisely your frametime - will be governed primarily by your ping. During a brief stint on a fiber connection, the same rig I run now got double the FPS in GUI screens and crowded areas compared to what it gets on DSL or cable.
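The "frametime governed by ping" point can be put in a toy model (all numbers below are illustrative assumptions, not measured STO values): if a GUI screen won't present a frame until the latest server round-trip completes, the achievable frame rate is capped near 1/RTT no matter how fast the renderer is.

```python
# Toy model: frame time when a GUI waits for a server round-trip each frame.
# All numbers are illustrative assumptions, not measured STO values.

def frame_time_ms(render_ms: float, rtt_ms: float, sync_every_frame: bool) -> float:
    """Effective frame time when each presented frame may block on the network."""
    if sync_every_frame:
        # The frame cannot be presented before the round-trip finishes.
        return max(render_ms, rtt_ms)
    return render_ms

render = 8.0  # ~125 fps worth of pure rendering work

for rtt in (10.0, 30.0, 80.0):  # fiber-ish, cable-ish, DSL-ish pings
    ft = frame_time_ms(render, rtt, sync_every_frame=True)
    print(f"rtt={rtt:>5} ms -> {1000 / ft:6.1f} fps")
# A 10 ms ping yields 100 fps; an 80 ms ping caps the same rig at 12.5 fps.
```

Same GPU, same CPU, very different FPS: only the round-trip time changed, which matches the fiber vs. DSL observation above.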
What lags people on ESD is not the extra rendering of the player characters - if it was simply rendering, NPC-heavy zones like the ground battlezones would be just as choppy - it's the way netcode is implemented in order to sync the clients.
It's simply a fact of STO's netcode that much of the lag can't be fixed no matter what rendering engine they choose. The crashes and graphical glitches could be, but that doesn't require switching to DX12 or any other API, it just requires fixing.
People are jumping on the DX12 and Vulkan bandwagons before we even have anything more than tech demos out - according to some "testing" so far, the R9 290X is "faster" than the Titan X under DX12 - a "fact" which will most likely not hold up under actual gameplay. The point is DX12 will allow more speed in certain circumstances for certain titles, but it really won't be the magic bullet that everyone seems to think it is. There is precisely one AAA game so far that has actually been held back by draw call limits, and AC: Unity wasn't exactly known as a stellar experience anyway.
Remember how Mantle was going to revolutionize gaming graphics a short time back? Yeah, that totally happened. In like 3 games.
Edit: I am not bashing you or flaming you in any way - your thoughts on how MMOs work are the way that most games do in fact work, and are based on what on the surface seems to be a logical method for the game back end. This is the same logic with which novice programmers usually approach MMOs and related netcode. Perfecting netcode for MMOs is something that very few companies have done - and the most perfect examples of it are mostly from the pre- and early-broadband eras. Once the majority of customers started to have broadband with low(er) pings, real optimization basically stopped being a priority and certain things were done simply because there wasn't a real need to economize how much bandwidth was used.
Take the example of torpedoes in STO (note that I do not actually know how torpedoes function in STO; I am using them to illustrate a point). The most logical way that a random coder off the street would approach a torpedo is to spawn a new object, give it a movement path, have the server actually move and track the object, and sync it to all clients, culminating in a to-hit and damage roll. This is a waste of bandwidth AND server performance. The bandwidth-friendly approach is NOT to spawn the torpedo as an object on the server - the server simply acknowledges that a torpedo was fired, does a to-hit and damage roll, and tells all the clients that a torpedo was fired and whether or not it hit. The client then does all the animation and pathing for the torpedo on its own. This has the side effect that not all clients will see the torpedo flight in an identical way, but it is incredibly efficient. Only the firing and hit status of the torpedo need to be synced, and you have saved a huge amount of bandwidth.
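The bandwidth-friendly scheme described above can be sketched like this (names and numbers are hypothetical illustrations, not Cryptic's actual code):

```python
# Sketch of the bandwidth-friendly torpedo scheme: the server resolves the
# shot once, clients receive only a tiny event and animate the flight
# locally. All names and values here are hypothetical.
import random

def server_fire_torpedo(attacker_id: int, target_id: int,
                        hit_chance: float, rng: random.Random) -> dict:
    """Server side: roll to-hit and damage once; no per-tick object tracking."""
    hit = rng.random() < hit_chance
    damage = rng.randint(800, 1200) if hit else 0
    # This small dict is the ONLY thing broadcast to clients.
    return {"event": "torpedo_fired", "attacker": attacker_id,
            "target": target_id, "hit": hit, "damage": damage}

def client_animate(event: dict, frames: int = 5) -> list:
    """Client side: invent a local flight path; it need not match other clients."""
    return [f"frame {i}: torpedo at {i / (frames - 1):.0%} of path"
            for i in range(frames)]

rng = random.Random(42)  # seeded so the outcome is reproducible
event = server_fire_torpedo(attacker_id=1, target_id=7, hit_chance=0.9, rng=rng)
print(event)              # a handful of bytes instead of per-tick position sync
print(client_animate(event)[-1])
```

The event dict is the entire network cost of the shot; every per-frame position update happens client-side for free.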
My guess is "hope" keeps people not playing but posting on the forums. For others, it's a path of sad realization and closure. Grieving takes time. The worst "haters" here love the game, or did at some point.
Not disagreeing with you on most of your post, as you basically wrote exactly the same thing about the problems the netcode causes as I did in another post here on the forum.
Picking up on your torpedo example, though: you book it as a netcode problem when, in the end, it is clearly an API-restricted client issue.
You are totally correct in saying that every client renders the torpedo path for itself and the server only checks whether it was fired and whether it hit.
Still, the problem for players is not that we see different flight animations of torpedoes, but that some players don't see any torpedoes at all, and that is due to what Cryptic called an "effects budget": a limit on what can be shown client-side at the same time.
So what is this effects budget supposed to be?
Besides being a fancy and dramatic name, it simply means, in STO terms, that if you have too many flashy things happening on your screen at the same time, certain effects are not shown anymore.
This is not a netcode issue but an intentional client-side decision.
Why would someone do this?
Sacrificing fidelity and visual feedback is not always bad if the alternative is overall bad performance. DX11 and earlier are limited to a certain number of draw calls, and arbitrarily removing effect rendering, as STO does, is a measure to stay within this draw-call limit before the game completely saturates the API and kills your FPS.
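The effects-budget idea can be sketched like this (a hypothetical model; Cryptic's actual heuristics aren't public): rank queued effects by importance and draw only as many as fit under a fixed cap, so whatever falls below the cut line silently disappears.

```python
# Hypothetical sketch of a client-side effects budget: keep the N most
# important effects, silently drop the rest. STO's real heuristics unknown.

def apply_effects_budget(effects: list, budget: int) -> list:
    """effects: (name, priority) pairs; higher priority = more important."""
    ranked = sorted(effects, key=lambda e: e[1], reverse=True)
    return [name for name, _ in ranked[:budget]]

queued = [
    ("enemy_torpedo", 9),    # visual feedback the player badly needs
    ("ally_beam_fx", 3),
    ("explosion_debris", 2),
    ("warp_trail", 1),
    ("shield_shimmer", 4),
]

shown = apply_effects_budget(queued, budget=3)
print(shown)  # ['enemy_torpedo', 'shield_shimmer', 'ally_beam_fx']
# Everything below the cut line is never drawn -- the "invisitorp"
# complaint is what happens when the wrong thing falls below it.
```

A larger draw-call ceiling would let the budget grow, which is exactly the argument for a lower-overhead API.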
I totally agree with your point that the netcode has a long way to go, and the fact that STO is a one-shard game with people all over the world, at times with more than 300 ms of latency difference between each other, doesn't help either in group content.
So, to remove further barriers - the artificial lead balls the game has intentionally put on to work around edge-case scenarios - jumping on the Vulkan, or maybe even the inferior DX12, bandwagon would make a huge difference.
After all, the best optimizations are not the ones that people see, but those that people notice and feel as natural.
P.S. Summing up your post: STO is not a bad game as long as you only play it on your private network (LAN) with a group of friends.
P.P.S. Mantle runs great, but as with all vendor-locked technologies, it takes ages to get adopted in a meaningful manner. Take a look at PhysX: compare the number of titles that use hardware PhysX to the Mantle titles, divide by how long each API has been available, and again by how much the player actually benefits, and hardware PhysX turns out to be a failure where Mantle was actually a success.
The question of whether AMD dropped Mantle on their own terms or got pushed by the DX12 group members should not be our concern.
Again, if laggy instances were actually using GPU and CPU power you would be completely correct. STO is *not* saturating your CPU with drawcalls. Ever. Yes, I am aware of the effects budget. And it has been modified to allow many more effects on screen at a time - I haven't seen (or haven't not seen...) invisitorps in over a year now.
The point is, STO never saturates the CPU or GPU on even a medium-grade system nowadays. With the exception of some poorly optimized effects (Fluidic space being a great example), waiting for sync is what causes most of the perceived graphical lag. There are numerous MMOs now that have transitioned to newer and more efficient graphics engines - in cases where they have robust netcode, they are just as lag-free as they were to begin with; in cases where they had TRIBBLE netcode, they are just as framerate-limited as they were before.
Changing graphics APIs in STO may fix missing effects for some people, but it will not in any way help with lag in CCA, on ESD, in ISA, or in certain GUI windows - anywhere the situation has nothing to do with limited CPU or GPU power. A faster API is not going to help in any crowded area that is not already hitting you with 100% CPU or GPU usage. As a thought experiment, I dropped my CPU core speed down to 2.6 GHz from 4.5 GHz and ran a CCA. I saw the same average framerate at both speeds, with frequent CPU/GPU usage near 0% at every spike in network traffic. The netcode in STO (in all of Cryptic's games, frankly) is not well done.
PPS Mantle is not a closed standard. AMD offered Mantle, free with no strings attached, to nVidia. nVidia refused - as evidenced by some of the DX12 pretesting, likely because nVidia cards would see no benefit, and in most cases perform more poorly than AMD cards.
Edit: Someone just asked me why the GUIs need to sync so often, and not just when changes are made. Put simply, because the majority of issues with MMO...player malfeasance shall we say.....are related to abusing lag and client sync.
I see where you're going, and I can't disagree with your arguments; still, putting that much weight on the netcode as the cause of the issues is a little short-sighted.
So let me use a more flowery description of the problem most players experience.
Running STO is, for your system, about the same as a person carrying a bucket of water full of holes from a well to a burning house.
The speed at which the person hauls the bucket is the latency: the longer it takes, the more water drips out of the bucket and the less water you have to quench the flames.
This totally fits your assessment, since the data to be delivered is time-sensitive and the longer the delay, the more bad things happen - but still, the bucket is leaking water.
Going a bit further and looking into the holes in the bucket:
These holes are all the time-sensitive data that has to be tracked and needs server-client communication; the more DoTs, buffs, debuffs, attacks, and powers are initiated in the same timespan, the more holes the bucket has.
This also fits your explanation that more lag is to be expected in a high-DPS, power-spamming environment.
Now we add another concept: extra people hauling the bucket. This still does not change the speed at which the water drips out of the bucket, or the rate at which the house burns down, but it reduces the load on each individual hauling the bucket, resulting in a smoother workflow and even allowing extra tasks of opportunity along the way, like patching up some of the holes on the go - or, in STO terms, displaying all the effects, or even some extra ones.
And that's what a low-level API like Vulkan/DX12/Mantle could do.
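The "extra bucket haulers" part - splitting command-recording work across threads, which DX12/Vulkan permit and DX11 largely does not - can be illustrated with a toy work-split (purely illustrative; no real graphics API is called, and CPython's GIL means this demo only shows the split, not true native-side parallelism):

```python
# Toy illustration of multithreaded command recording: divide the draw items
# among workers, record one command list per worker. Purely illustrative;
# no real graphics API is called here.
from concurrent.futures import ThreadPoolExecutor

def record_commands(chunk: list) -> list:
    """Stand-in for recording one command list for a slice of draw items."""
    return [f"draw {item}" for item in chunk]

items = list(range(8000))
chunks = [items[i::4] for i in range(4)]  # one slice per worker thread

with ThreadPoolExecutor(max_workers=4) as pool:
    command_lists = list(pool.map(record_commands, chunks))

total = sum(len(cl) for cl in command_lists)
print(total)  # 8000 -- same total work, but recorded by four workers
```

The point of the low-level APIs is that this recording step, single-threaded by design in DX11, becomes safely divisible.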
It's easy to see that every single one of these three steps would help the game improve, but none on its own would be enough to solve all of the issues the game encounters.
The first would be to start heavily optimizing the netcode, something that needs a lot of expertise and care so as not to break anything critical and make it even worse.
Plus, it is scary as hell to fix something that has been broken for so long when you know you don't really have a margin for error without negatively impacting possible revenue.
The second is simpler, but no less scary and work-intensive: start changing game mechanics to favor calculate-once-and-be-done mechanics over the mess of dynamic procs and effect tracking that has flooded the game recently.
Cryptic created a monster with all those special consoles and specialization and reputation traits and powers, and they can only get rid of it if they re-evaluate the rules it operates on.
The third is the option to make use of hidden system resources. You yourself mentioned STO does not task the system to its fullest, and exactly that is the problem: it should, but it can't, because the current API is holding it back.
So all three of these major bullet points are possible ways to fix the situation, but only all three together will make the most of it.
And as soon as you realize that, you see Cryptic has its back to the wall here, because no matter what they do, it's a lot of work to handle.
DX11 barely works for STO/Neverwinter (it's basically the first troubleshooting option that people bring up if you're having any sort of graphic or crash problem for a reason). Maybe they should fix DX11 first (or just put everyone on DX9 like they should) before trying out DX12.
What an excellent thread! I've read all posts avidly. And learned a lot from you guys.
Just to comment on the DX11 issue, re: switching to DX9 - it doesn't help either. I experience the lag you are talking about; however, it's the issues the code seems to have with Nvidia drivers that are causing me grief at the moment. Four times last night the game mishandled (I'm assuming) the drivers and caused the driver to stop, temporarily freezing my desktop. And that was on DX9, which I'd switched to in an effort to reduce the driver hangs.
Actually, the "fix DX11 first / put everyone on DX9" suggestion would hold true and make sense if DX11 were something totally different from DX9.
DX11 is not much more than a version of DX10 with a wider feature set; MS decided to call it DX11 for marketing reasons, or to separate it from the taint a lot of uninformed users attached to Vista.
And while DX10 is likewise just a step up from DX9, DX12/Vulkan is not.
A lot of the dirty hacks that are necessary to get a reasonably performing application are, more often than not, what causes driver issues.
As developers do not have the option to fix those issues themselves, it ultimately falls to the chip manufacturers and their driver teams to ship application-specific hotfixes, sort of.
So where am I going with this?
Those so-called dirty hacks may simply no longer be necessary with DX12/Vulkan/Mantle, as certain problems developers had to work around do not exist anymore.
Also, a low-level API like Vulkan allows more direct control of the system, opening up bug fixing that before could only be handled by the chip vendor and their driver team.
Does this mean that as soon as Cryptic adopts DX12, or better Vulkan, all things are fluffy?
Of course not. Those new APIs are new, and as of now only a limited number of tools exist, that I know of, to help transition to them.
In the case of Vulkan, unlike DX12, it is to an extent still OpenGL with Mantle extensions (as I write this I can already picture the mob that wants to throw rocks at me for saying that), at least in a broader sense.
Besides all the advantages that an open API like OpenGL has over a platform-locked one like DirectX, it brings with it the Mantle part, which has already had a successful deployment.
Coming to a conclusion: by adopting Vulkan or (God forbid) DX12, Cryptic could fix issues once and for all instead of band-aiding the DX11 version until the next feature improvement breaks it again.
And in addition, with Vulkan, at least in theory, you nostalgic people could keep playing on your legacy OS without the need for a Win10 migration.
I totally agree this game needs severe performance enhancements.
UI lag is still considerable; I don't get why the UI has to be drawn every single damn frame. Really? Let the UI run at 30 fps instead so it takes fewer CPU resources.
I'm afraid DX12 won't give much improvement, though, unless they do it properly. It's not like DX11 gave much performance benefit over DX9, not even CPU-wise.
Unfortunately, the graphics issues are not completely tied into the API - and moving to DX12 or Vulkan, or Mantle wouldn't fix anything without more significant back end improvements.
For instance, the primary reason larger STFs lag (take as an example particularly spammy CCAs) has nothing to do with CPU or GPU usage - if you watch your system during these times, your GPU and CPU usage will go down to almost nothing - your system is waiting on STO's netcode to update what is supposed to be happening on the screen - a more efficient render engine would do nothing to combat this.
The same is true for most of the laggy GUI screens - STO, as many other MMOs, locks the framerate in some circumstances based on the rate that updates can be received and transmitted to the server. Hence on a screen like your Doff or reputation windows where both your client and the server need to update in real time, the framerate, or more precisely your frametime, will be governed primarily by your ping. During a brief stint on a fiber connection, the same rig I run now got double the FPS in GUI screens and crowded areas than it does on DSL or cable.
What lags people on ESD is not the extra rendering of the player characters - if it was simply rendering, NPC-heavy zones like the ground battlezones would be just as choppy - it's the way netcode is implemented in order to sync the clients.
It's simply a fact of STO's netcode that much of the lag can't be fixed no matter what rendering engine they choose. The crashes and graphical glitches could be, but that doesn't require switching to DX12 or any other API, it just requires fixing.
People are jumping on the DX12 and Vulkan bandwagons before we even have anything more than tech demos out - according to some "testing" so far, the R9 290x is "faster" than the Titan X under DX12 - a "fact" which will most likely not hold up under actual gameplay. The point is DX12 will allow more speed in certain circumstances for certain titles, but it really won't be this magic bullet that everyone seems to think it is. There is precisely one AAA game so far that has actually been held back by draw call limits, and AC: Unity wasn't exactly known as a stellar experience anyway.
Remember how Mantle was going to revolutionize gaming graphics a short time back? Yeah, that totally happened. In like 3 games.
Edit: I am not bashing you or flaming you in any way - your thoughts on how MMOs work are the way that most games do in fact work, and are based on what on the surface seems to be a logical method for the game back end. This is the same logic with which novice programmers usually approach MMOs and related netcode. Perfecting netcode for MMOs is something that very few companies have done - and the most perfect examples of it are mostly from the pre- and early-broadband eras. Once the majority of customers started to have broadband with low(er) pings, real optimization basically stopped being a priority and certain things were done simply because there wasn't a real need to economize how much bandwidth was used.
Take the example of torpedoes in STO (note that I do not actually know how torpedoes function in STO, but I am using them to illustrate a point). The most logical way that a random coder off the street would approach a torpedo is to spawn a new object, give it a movement path, have the server actually move and track the object, and sync it to all clients, culminating in a to-hit and damage roll. This is a waste of bandwidth AND server performance. The bandwidth-friendly approach is NOT to spawn the torpedo as an object on the server - the server simply acknowledges that a torpedo was fired, does a to-hit and damage roll, and tells all the clients that a torpedo was fired and whether or not it hit. The client then does all the animation and pathing for the torpedo on its own. This has the side effect that not all clients will see the torpedo flight in an identical way, but it is incredibly efficient. Only the firing and hit status of the torpedo need to be synced, and you have saved a huge amount of bandwidth.
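A sketch of that event-based approach (illustrative names only; as noted above, this is not necessarily how STO actually implements torpedoes):

```python
import random

def server_fire_torpedo(attacker, target, hit_chance=0.75, rng=random):
    # Server-side: resolve the shot once, then broadcast only the result.
    # No torpedo object is spawned or tracked on the server.
    hit = rng.random() < hit_chance
    damage = rng.randint(800, 1200) if hit else 0
    # One tiny event message replaces a stream of position updates.
    return {"event": "torpedo_fired", "from": attacker,
            "to": target, "hit": hit, "damage": damage}

def client_on_torpedo_event(event):
    # Client-side: animate the full flight path locally. Each client may
    # interpolate a slightly different arc; only the outcome is authoritative.
    frames = [f"torpedo at {t / 10:.1f}" for t in range(11)]  # local-only path
    outcome = "hit" if event["hit"] else "miss"
    return frames, outcome
```

The whole exchange is one small dictionary per shot, instead of per-tick position sync for every projectile in flight.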
It's simply not true. Pull out your network cable, or introduce artificial lag or packet loss, and the game will still render at its maximum obtainable FPS.
You're forgetting to take into account how programs work at a lower level. When you monitor your CPU usage you probably don't account for the fact that most games' rendering thread is bound to one core only.
The game always maxes out either your GPU or one CPU core (25% overall on a quad core, 50% on a dual core). As simple as that. If not, something might be seriously wrong with your config.
The game DOES use more cores, but only if there is work to do that is multi-threaded by default.
It is an 'issue' with just about all the MMOs I've played; most are heavily bound to one CPU core.
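For reference, the single-core math behind those 25%/50% figures (a trivial model of what a task manager reports, not a measurement of STO itself):

```python
def overall_cpu_percent(total_cores, saturated_cores=1):
    # A task manager averages load across all cores, so one fully
    # pegged render thread reads as a deceptively low overall figure.
    return 100.0 * saturated_cores / total_cores

# One saturated thread: 25% overall on a quad core, 50% on a dual core.
```

So "only 30% CPU usage" can still mean the render thread is completely maxed out.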
Yeah, there is a difference between network lag and render engine lag (i.e. FPS drops).
When you get "server not responding" messages the network has completely dropped momentarily, but you can still rotate your camera about your ship, open your inventory, etc. while you wait. Graphics rendering is still happening separately.
That UI frame rate hit is a good example of what could be improved. Open the inventory, doff and fleet windows and you'll get some nice FPS hits. Could just be how their engine handles it, I dunno.
As mentioned, there's the artificial draw call limit STO's engine apparently has. I.e. when there's too much on screen the engine starts culling out the least important things. The Crystalline queue is a good example: with too much going on, shards can disappear, beams from your ship can sometimes not be seen, or you get invisi-torps, etc.
DX12/Vulkan would specifically remove (or substantially increase) any such limitations, and lowering CPU overheads would allow lower-end systems (probably most of the user base) to get more performance from the game.
Splitting the graphics driver work across multiple cores, away from the main engine thread, would also free up the engine thread to do other things, faster, separate from graphics rendering. So kind of a double benefit.
I guess it most likely won't happen though. Integrating DX12/Vulkan requires talking directly to GPUs; I highly doubt Cryptic's engineering team has the expertise and manpower to move from a high-level API to a low-level one.
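The kind of parallelism being described is, roughly, that DX12/Vulkan let several threads record rendering work independently and submit it in one batch, where DX9/DX11 funnel nearly everything through one immediate context. A simplified sketch of the pattern, using plain threads rather than real graphics API calls:

```python
from concurrent.futures import ThreadPoolExecutor

def record_command_list(chunk):
    # Stand-in for filling a command list/buffer for one slice of the scene.
    return [f"draw {obj}" for obj in chunk]

def render_frame(scene, workers=4):
    # Record command lists in parallel, then submit them in order on the
    # main thread -- the DX12/Vulkan-style division of labour.
    chunks = [scene[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        lists = list(pool.map(record_command_list, chunks))
    submitted = [cmd for cl in lists for cmd in cl]
    return submitted
```

Under DX9 the equivalent of `record_command_list` effectively has to run on one thread, which is exactly the single-core bottleneck discussed above.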
Yeah, they do not have the expertise. The DX11 implementation has been around for years, yet it does not give any decent performance boost, nor any new fancy effects.
One of the major resource hogs is indeed the CPU.
One solution could be to not render the UI every single frame, but every other frame instead.
Or shift the workload to a separate thread.
As for disappearing graphic effects, it really looks like an artificial limitation, one they have never acknowledged, unfortunately. Any dev who had any clue as to how the engine works pretty much left STO ages ago.
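A minimal sketch of the every-other-frame idea (hypothetical code, not Cryptic's UI system):

```python
class ThrottledUI:
    # Redraw the UI only every Nth engine frame, reusing the cached
    # surface in between -- one of the two mitigations suggested above.
    def __init__(self, every_n=2):
        self.every_n = every_n
        self.frame = 0
        self.cached = None
        self.redraws = 0

    def draw(self, build_ui):
        if self.cached is None or self.frame % self.every_n == 0:
            self.cached = build_ui()   # expensive rebuild
            self.redraws += 1
        self.frame += 1
        return self.cached             # cheap blit of the last surface
```

At `every_n=2` the expensive UI rebuild runs half as often, at the cost of the UI updating one frame late at worst.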
The DX11 branch was added to STO because of Neverwinter, if I recall an older dev comment correctly.
And for Neverwinter it was added not to improve performance or add flashy GFX, but for tessellation.
Feel free to correct me if I am wrong.
Still, a lot of people tend to miss the point that DX11 is not supposed to improve performance, though it can, since it is simpler and less intensive to achieve certain results in DX11 than in DX9.
Just to add to this discussion, DX11 does have performance improvements if coded to use them. If not then it won't do much over DX9.
The DX11 implementation in both of Cryptic's flagship titles is woeful. Broken and bugged. I've used multiple gfx cards and multiple drivers; all result in the same graphics corruption and crashing, either quickly or after a longer time (30mins...). It does, however, make the game look a lot better (for the few minutes it works) before corruption of textures sets in.
DX12 would be a benefit if issues such as the netcode were addressed at the same time. Add to that addressing the underlying engine issues with rendering the UI and in-game assets like weapons fire and audio streams, and the game could move forward and work pretty well. Currently there are indie games in Early Access which feel more polished and have fewer issues, which is worrying.
On the DX12 Titan vs R9 290X benches, yup the Titan will lose. It'll still lose after DX12 ships as Nvidia are behind on compute performance vs AMD. Until they catch that up AMD will continue to outstrip them performance wise. The only games I see Nvidia being the better card to have are Nvidia sponsored games, mostly because they literally attempt to block the AMD cards working efficiently with them despite their claims in the press that they don't. Watch_Dogs was a good example of this, but there are others too.
TBH I'd really like Nvidia to just get off their proprietary horse and start to work with AMD more in support of the programs instead of just a nonsensical war to drive costs up. The whole G-Sync vs Freesync argument will fall on the side of Freesync, primarily because of ease of use. There's no extra hardware for the monitors, they'll end up cheaper than G-Sync and technically don't lock you into a gfx card choice, whereas G-Sync does. Eventually Nvidia will support Freesync because it makes financial sense to do so, but not before they've lost money over G-Sync. There are many situations where they have gone proprietary (CUDA... PhysX...) and it hasn't worked out for them, yet they still push that agenda with morbid determination, I just don't get it.
I've read the whole thread, and there are some statements about performance that I just can't relate to.
For me, things like the Crystalline Entity are unplayable on DX9 due to stutter and single-digit framerates, but going with DX11 I get a completely smooth framerate everywhere in STO. Of course the game is slightly more unstable in DX11, but that seems to be a general problem with games that weren't originally written for it.
As for the UI lagging the game: Yes, the framerate drops, but I don't get the lag except for severe server lag. Other games like SWTOR and Tera completely freeze when opening and closing any UI elements, and both run a lot smoother at much higher framerates than STO. I don't know much about coding graphics, but to me it looks like a general problem with putting 2D elements on top of a 3D environment, where those elements require a different registration of the mouse movements compared to everything around them.
For me, all the lag I have comes from server (connection) lag. I'm on 50 Mbit fibre, have a 5 Mbit cap on the transatlantic cable, but quite often the server connection while playing runs at 0-50 kbps and I've yet, in 3 years, seen it ever get above 1 Mbit. Only when the loader is downloading patches does it ever reach sensible connection speeds, but as soon as I get into the game it's like the servers run on dial-up. The game has phenomenal lag tolerance in that respect, but the rubber banding and unresponsiveness isn't necessarily all down to the netcode. Quite often it feels more like the servers simply aren't capable of pushing out data fast enough to support millions of players on one shard, instanced or not.
Coding for DX12 would however lock the game to Windows 10, which means it'd maybe benefit a minority of STO players. We already have issues with the DX11 support actually making things worse for some people, and a lot of players are still on Windows XP and on GPUs incapable of anything higher than DX9.
Also: All the major MMOs have performance problems, areas where the FPS drops to nothing, UI lag, and random unexplainable graphic problems that only hit a few people. It comes from the nature of MMO design, and the graphics API cannot do anything to prevent that.
Vulkan would not care if you run it on 32-bit WinXP, Vista, Win7, Win8, Win10, Linux, OS X, BSD, Android, you name it, as long as the client is compiled for the platform.
And yes, Vulkan, and of course DX12, would really be a godsend this time, not like the promised DX11 eyewash, as it handles the hardware totally differently.
So basically all the people with high-core-count CPUs that now have bad performance (AMD, anyone?) would make a huge leap in performance, even outclassing a lot of Intel CPUs at five times the cost.
If that is not enough reason to go this route and draw in a greater player base with fewer issues, not forcing them to upgrade hardware but letting them spend money on the game instead, I don't know how bad a business PWE is, as you don't leave possible revenue untapped.
I see your point about the Win10 lock-in for DX12. Good point! I guess I can rebut that by saying DirectX 11.2 (I think that's the version number) will be basically the same, DX12-wise, just excluding the new feature sets. That is, it'll have all the draw-call and CPU-cycle-saving features while being the most backwards compatible, which is perhaps what we should actually be hoping for as a minimum to achieve the goal we're thinking of here.
Probably easiest to get DX11.2 support, given it's "just" a revision of the DX11 code base. (not that it means it'll be a minor task to achieve)
Yeah wasn't sure where to post this, kinda just wanted STO engineers to see it and respond. Not sure they'd see it no matter where I posted, only devs seem to be around here.
You are right that Vulkan would need mostly a total rewrite, as would DX12 anyway.
As for DX11.2, that's out already anyway but does not change the underlying issue of CPU overhead; you are still stuck in the downward spiral of being shackled to a single-core experience no matter how many cores your CPU has to spare.
Yes, the Mac client uses Wine, and Linux people are encouraged by Cryptic to use the same as well (Wine, not the Mac client, as far as I know).
So naturally it would be Vulkan to go for: no license fee, no walled garden and free platform choice to port to.
And on the plus side, Vulkan incorporates knowledge and code from Mantle, which already had a successful deployment; it's just that AMD decided to drop it in favor of supporting DX12 and Vulkan at the same time.
As a Linux gamer, I prefer to play in a native Linux client. Better security than Microsoft's operating system.
I was actually thinking DX11.3, but yeah just checked and no CPU lowering benefits. Oh well.
Moving to DX12 would be the path of least resistance, I figure; the DX12 path would be slightly more similar to the DX11 one than starting from scratch on a Vulkan implementation...
Although starting from scratch might actually be a good thing too. And come on, who doesn't want a Vulkan-powered render engine for STO code-named Spock or something!
edit: mixed up my Vulcans lol
For instance, the primary reason larger STFs lag (take as an example particularly spammy CCAs) has nothing to do with CPU or GPU usage - if you watch your system during these times, your GPU and CPU usage will go down to almost nothing - your system is waiting on STO's netcode to update what is supposed to be happening on the screen - a more efficient render engine would do nothing to combat this.
Not disagreeing with you on most of your post; in regard to the problems the netcode is causing you basically wrote exactly the same as I did in another post here in the forum.
Picking up on your torpedo example, though, you tend to book it as a netcode problem when in the end it is clearly an API-restricted client issue.
You are totally correct in saying that every client renders the torpedo path for itself and the server only checks whether it has been fired and whether it hit.
Still, the problem for players is not that we see different flight animations of torpedoes but that some players don't see any torpedoes at all, and that is due to what Cryptic called an "effects budget": a limit on what can be shown client-side at the same time.
So what is this effects budget supposed to be?
Besides being a fancy and dramatic name, in STO terms it simply means that if you have too many flashy things happening at the same time on your screen, certain effects are not shown anymore.
This is not a netcode issue but an intentional client-side decision.
Why would someone do this?
Sacrificing fidelity and visual feedback is not always bad if the alternative is overall bad performance. DX11 and previous APIs are limited to a certain number of draw calls, and arbitrarily removing effect rendering, as in STO, is a measure to stay within this draw-call limit before the game completely saturates the API and kills your FPS.
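A toy model of how such an effects budget could work (purely hypothetical; Cryptic has never published its actual culling rules):

```python
import heapq

def apply_effects_budget(effects, budget):
    # Keep only the `budget` highest-priority effects; everything else is
    # silently culled, which is why low-priority visuals such as some
    # torpedoes vanish first in spammy fights.
    return heapq.nlargest(budget, effects, key=lambda e: e["priority"])
```

With a budget of 2, the lowest-priority effect is simply never drawn, regardless of what the netcode delivered.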
I totally agree with your point that the netcode has a long way to go, and the fact that STO is a one-shard game with people all over the world, at times with 300 ms+ latency difference between each other, doesn't help either in group content.
So, to remove any further barriers, in regard to the artificial lead balls the game has intentionally put on to circumvent edge-case scenarios, jumping on the Vulkan or maybe even the bad DX12 bandwagon would make a huge difference.
After all, the best optimisations are not the ones that people see but those that people notice and feel as natural.
P.S. Summing up, your post would suggest that STO is not a bad game as long as you only play it on your private network (LAN) with a group of friends.
P.P.S. Mantle runs great, but as with all vendor-locked technologies it takes ages until it gets adopted in a meaningful manner. Take a look at PhysX: compare the number of titles that use PhysX hardware to the Mantle titles, divide by the time each API has been available, and again by how much the player actually benefits, and PhysX hardware turns out to be a failure while Mantle was actually a success.
The question whether AMD dropped Mantle on their own terms or got pushed by the DX12 group members should not be our concern.
The point is, STO never saturates the CPU or GPU on even a medium-grade system nowadays. With the exception of some poorly optimized effects (Fluidic space being a great example), waiting for sync is what causes most of the perceived graphical lag. There are numerous MMOs now that have transitioned to newer and more efficient graphics engines: in cases where they have robust netcode, they are just as lag-free as they were to begin with; in cases where they had TRIBBLE netcode, they are just as framerate-limited as they were before.
Changing graphics APIs in STO may fix missing effects for some people, but it will not in any way help with lag in CCA, on ESD, in ISA, or in certain GUI windows, or anywhere the situation has nothing to do with limited CPU or GPU power. A new API is not going to help in any crowded area which is not already hitting you with 100% CPU or GPU usage. As a thought experiment, I dropped my CPU core speed down to 2.6 GHz from 4.5 GHz and ran a CCA. I saw the same average framerate at both speeds, with frequent CPU/GPU usage near 0% at every spike in network traffic. The netcode in STO (in all of Cryptic's games, frankly) is not well done.
PPS Mantle is not a closed standard. AMD offered Mantle, free with no strings attached, to nVidia. nVidia refused - as evidenced by some of the DX12 pretesting, likely because nVidia cards would see no benefit, and in most cases perform more poorly than AMD cards.
Edit: Someone just asked me why the GUIs need to sync so often, and not just when changes are made. Put simply, because the majority of issues with MMO...player malfeasance shall we say.....are related to abusing lag and client sync.
I see where you are going, and I can't disagree with your arguments; still, putting that much weight on the netcode as the cause of the issues is a little short-sighted.
So I'll use a more flowery description of the problem most players experience.
Running STO is, for your system, about the same as a person carrying a bucket of water full of holes from a well to a burning house.
The speed at which the person hauls the bucket is the latency: the longer it takes, the more water drips out of the bucket and the less water you have to quench the flames.
This totally fits your assessment, as the data to be delivered is time-sensitive and the longer the delay the more bad things happen; but still, the bucket is leaking water.
Let's go a bit further with this and look at the holes in the bucket.
These holes are all the time-sensitive data that has to be tracked and needs server-client communication: the more DoTs, buffs, debuffs, attacks and powers are initiated in the same timespan, the more holes the bucket has.
This also fits your explanation that more lag is to be expected in a high-DPS, power-spamming environment.
Now we add another concept: extra people hauling the bucket. This still does not change the speed at which the water drips out of the bucket, or the rate at which the house burns down, but it reduces the load on each individual hauling the bucket, resulting in a smoother workflow and even allowing extra tasks of opportunity on the go, like patching up some of the holes - or, in STO terms, displaying all effects or even some extra.
And that's what a low-level API like Vulkan/DX12/Mantle could do.
It's easy to see that every single one of these three steps would help the game improve, but none alone would be enough to solve all of the issues the game encounters.
The first would be to start heavily optimizing the netcode, something that needs a lot of expertise and care so as not to break anything critical and make it even worse.
Plus it is scary as hell to fix something that has been broken for so long when you know you don't really have a margin of error without negatively impacting possible revenue.
The second is simpler but no less scary and work-intensive: starting to change game mechanics to favor calculated-once-and-done mechanics over the mess of dynamic procs and effect tracking that has been flooding the game recently.
Cryptic created a monster with all those special consoles and specialization and reputation traits and powers, and they can only get rid of it if they re-evaluate the rules it operates on.
The third is the option to make use of hidden system resources. You yourself mentioned STO does not task the system to its fullest, but exactly this is the problem: it should, but it can't, because the current API is holding it back.
So all three of these major bullet points are possible ways to fix the situation, but only all three together will make the most of it.
And as soon as you realize that, you see Cryptic has its back against the wall, because no matter what they do it is a lot of work they have to handle.
Just to comment on the DX11 issue, re: switching to DX9 - it doesn't help either. I experience the lag you are talking about; however, it's the issues the code seems to have with Nvidia drivers that are causing me grief at the moment. Four times last night the game mishandled (I'm assuming) the drivers and caused the driver to stop, temporarily freezing my desktop. And that was on DX9, which I'd switched to in an effort to reduce the driver hangs.
Actually this would hold true and make sense if DX11 were something totally different from DX9.
DX11 is not much more than a version of DX10 with a wider feature set; MS decided to call it DX11 for marketing reasons, or to separate it from the taint Vista had among a lot of uninformed users.
But while DX10 is also just a step up from DX9, DX12/Vulkan is not.
A lot of the dirty hacks that are necessary to get a reasonably performing application are, more often than not, the cause of driver issues.
As developers do not have the option to fix those issues, it ultimately falls to the chip manufacturer and their driver team to ship application-specific hotfixes of sorts.
So where am I going with this?
Those so-called dirty hacks may no longer be necessary with DX12/Vulkan/Mantle, as certain problems developers had to work around do not exist anymore.
Also, a low-level API like Vulkan allows more direct control of the system, opening up bug fixing that before could only be handled by the chip vendor and their driver team.
Does this mean that as soon as Cryptic adopts DX12, or better Vulkan, all things are fluffy?
Of course not; those new APIs are new, and as of now only a limited number of tools exist, that I know of, to help transition to them.
In the case of Vulkan, unlike DX12, it is to an extent still OpenGL with Mantle extensions (as I write this I already notice the mob that wants to throw rocks at me for saying that), at least in a broader sense.
Besides all the advantages that an open API like OpenGL has compared to a platform-closed one like DirectX, it brings with it the Mantle part, which has already been a successful deployment.
Coming to a conclusion: by adopting Vulkan or (God forbid) DX12, Cryptic could fix issues once and for all instead of band-aiding the DX11 version until the next feature improvement breaks it again.
And in addition, with Vulkan, at least in theory, you nostalgic people could still keep playing on your legacy OS without the need for a Win10 migration.
I totally agree this game needs severe performance enhancements.
UI lag is still considerable; I don't get why the UI has to be drawn every single damn frame. Really? Let the UI run at 30 fps instead so it takes less CPU resources.
I'm afraid DX12 won't give much improvement though unless they do it properly; it's not like DX11 gives much performance benefit over DX9, not even CPU-wise.
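A minimal sketch of the run-the-UI-at-30-fps idea (hypothetical, assuming a fixed UI rate decoupled from the engine frame rate):

```python
class FixedRateUI:
    # Update the UI at its own fixed rate (e.g. 30 Hz) regardless of the
    # engine frame rate, accumulating elapsed time between redraws.
    def __init__(self, ui_hz=30.0):
        self.interval = 1.0 / ui_hz
        self.accum = 0.0
        self.redraws = 0

    def tick(self, dt):
        self.accum += dt
        if self.accum >= self.interval:
            self.accum -= self.interval
            self.redraws += 1  # rebuild the UI here
            return True        # UI was redrawn this engine frame
        return False           # reuse the previous UI surface
```

At a 60 Hz engine frame rate this halves the UI work; the 3D scene keeps rendering at full speed either way.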
Its simply not true. Pull out your network cable, introduce artificial lag or packet loss, game will still render at max obtainable fps.
You're forgetting to take into account how programs work on a lower level. When you monitor your CPU usage you probably do not take into account most game's rendering thread is bound to 1 cpu only.
The game always maxed out either your GPU or 1 cpu core. (25 for quad, 50 for dual core) As simple as that. If not something might be seriously wrong with your config.
The game DOES use more cores, but only if there is stuff to do that is multi threaded by default.
It is an 'issue' with about all the MMO's ive played, most are heavily bound to 1 cpu core.
When you get "server not responding" messages the network has completely dropped momentarily but you can still rotate your camera about your ship, open inventory etc as you wait. Graphics rendering is still happening separately.
That UI frame rate hit is a good example of what could be improved. Open inventory, doff and fleet windows and you'll get some nice FPS hits. Could just be how their engine handles it, I dunno.
As mentioned there's the artificial draw call limit STO's engines apparently has. IE when there's too much on screen the engine starts culling out the least important things. Crystaline queue is a good example, too much going on and shards can disappear, beams from your ship can sometimes not be seen, or invisi torps etc.
DX12/Vulcan would specifically remove (or substantially increase) any such limitations, and lowering CPU overheads would allow lower end systems (probably most of the user base) to get more performance from the game.
Splitting out the graphics driver onto multiple cores from the main engine thread would also free up the engine thread to do other things, faster, separate to graphics rendering. So kind of a double benefit.
I guess it most likely won't happen though. Integrating DX12/Vulcan requires directly talking with GPU's, I highly doubt Cryptics engineering team has the expertise and man power to move from a high level API to a low one.
Yeah they do not have the expertise. DX11 implementation has been around for years, yet it does not give any decent performance boost, nor any new fancy effects.
one of the major resource hogs is indeed the cpu.
One solution could be to not render the ui every single frame, but every other frame instead.
Or shift the workload to a seperate thread.
As for disappearing graphic effects, it really looks like an artificial limitation, one they have never ever acknowledged unfortunately. Any dev that had a any clue as to how the engine works pretty much left sto ages ago.
And for Neverwinter it was supposed not to improve Performance or add flashy GFX but for tesselation.
Feal free to correct me if i am wrong.
Still alot of People do tend to miss the Point that DX11 is not supposed to improve Performance but it could as it simpler and less intensive to realize certain results in DX11 compared to DX9.
DX11 implementation in both Cryptics flagship titles is woeful. Broken and bugged. I've used multiple gfx cards, and multiple drivers, all result in the same graphics corruption and crashing either in a short time or a longer time (30mins...). It does however make the game look a lot better (for the few mins it works) before corruption of textures sets in.
DX12 would be a benefit if issues such as netcode were addressed at the same time. Adding to that the need to address the underlying engine issues with rendering the UI and ingame assets like weapons fire and audio streams and the game could move forward and work pretty well. Currently there are indie games which feel more polished and have less issues in Early Access which is worrying.
On the DX12 Titan vs R9 290X benches, yup the Titan will lose. It'll still lose after DX12 ships as Nvidia are behind on compute performance vs AMD. Until they catch that up AMD will continue to outstrip them performance wise. The only games I see Nvidia being the better card to have are Nvidia sponsored games, mostly because they literally attempt to block the AMD cards working efficiently with them despite their claims in the press that they don't. Watch_Dogs was a good example of this, but there are others too.
TBH I'd really like Nvidia to just get off their proprietary horse and start working with AMD in support of these programs, instead of waging a nonsensical war that drives costs up. The whole G-Sync vs FreeSync argument will fall on the side of FreeSync, primarily because of ease of use: there's no extra hardware in the monitors, they'll end up cheaper than G-Sync, and they technically don't lock you into a gfx card choice, whereas G-Sync does. Eventually Nvidia will support FreeSync because it makes financial sense to do so, but not before they've lost money on G-Sync. There are many situations where they have gone proprietary (CUDA... PhysX...) and it hasn't worked out for them, yet they still push that agenda with morbid determination; I just don't get it.
"You don't have to do something repetitive again and again and again that doesn't have much challenge; that's just a general good gameplay thing."
For me, things like the Crystalline Entity are unplayable on DX9 due to stutter and single-digit framerates, but with DX11 I get a completely smooth framerate everywhere in STO. Of course the game is slightly more unstable in DX11, but that seems to be a general problem with games that weren't originally written for it.
As for the UI lagging the game: yes, the framerate drops, but I don't get the lag except during severe server lag. Other games like SWTOR and Tera completely freeze when opening and closing any UI elements, and both run a lot smoother at much higher framerates than STO. I don't know much about coding graphics, but to me it looks like a general problem with putting 2D elements on top of a 3D environment, where those elements require a different registration of mouse movements compared to everything around them.
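That "different registration" point can be illustrated with a toy input router (again, hypothetical names, not any real engine's API): 2D UI elements live in screen space and get first claim on a mouse click via a cheap rectangle test, and only clicks that no UI element consumes fall through to the 3D world, which would need a much more expensive unproject/raycast to resolve.

```cpp
#include <cassert>
#include <vector>

// Screen-space UI rectangle (pixels, origin top-left).
struct Rect { int x, y, w, h; };

bool contains(const Rect& r, int mx, int my) {
    return mx >= r.x && mx < r.x + r.w &&
           my >= r.y && my < r.y + r.h;
}

// Returns true if the UI consumed the click; false means the click
// falls through to the 3D scene, where a ray cast into world space
// would be needed to find what was hit.
bool routeClick(const std::vector<Rect>& uiElements, int mx, int my) {
    for (const Rect& r : uiElements) {
        if (contains(r, mx, my)) {
            return true;  // simple 2D test, no projection involved
        }
    }
    return false;
}
```

So the 2D and 3D layers genuinely handle the same mouse event by different mechanisms, which is one reason UI overlays are a common source of hitches in engines that weren't designed around them.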
For me, all the lag I have comes from server (connection) lag. I'm on 50 Mbit fibre with a 5 Mbit cap on the transatlantic cable, but quite often the server connection while playing runs at 0-50 kbps, and in 3 years I've yet to see it get above 1 Mbit. Only when the launcher is downloading patches does it ever reach sensible speeds; as soon as I get into the game, it's like the servers run on dial-up. The game has phenomenal lag tolerance in that respect, but the rubber banding and unresponsiveness isn't necessarily all down to the netcode. Quite often it feels more like the servers simply aren't capable of pushing out data fast enough to support millions of players on one shard, instanced or not.
Coding for DX12 would, however, lock the game to Windows 10 (yuck), which means it'd benefit maybe a minority of STO players. We already have the issue of DX11 support actually making things worse for some people, and a lot of players are still on Windows XP and on GPUs incapable of anything higher than DX9.
Also: all the major MMOs have performance problems, areas where the FPS drops to nothing, UI lag, and random inexplicable graphic problems that only hit a few people. It comes from the nature of MMO design, and the graphics API cannot do anything to prevent that.