The premise of matching modules and packages one-to-one is, I think, incorrect. The way I see it, a package is just a library with a description of its dependencies, and possibly an already-compiled binary. It could contain multiple modules. Plus, you probably need to link against a specific package version, and the module proposal has no way to specify a module version. So you can't just extract package dependencies from module usage.
Gamedev is a good candidate: compiling in parallel, running different processes that transform resources, running a not-yet-optimized game engine in a debug build, etc.
You can get "true 4K" out of the console, but rendering at 4K is so taxing it'll chew up all the extra processing power of that GPU.
That means that if developers want to increase rendering quality in ways other than spatial resolution, they will probably resort to checkerboard rendering and/or dynamic resolution, as they had to on the PS4 Pro.
Of course, overall, you can probably expect better quality than a PS4 Pro, since there's more power. It's just not very wise to spend it all on 4K output, especially when most people will have a hard time noticing the difference at living-room distance from their TV.
Agreed. They are pushing 4K for marketing, but if developers largely aimed for 1080p at 60fps with HDR, the games would probably look a lot better, especially since most people don't have a 4K TV that they sit close enough to for the difference to even be visible.
1440p might be a nice middle ground for people with 4K TVs. It depends on how much antialiasing you can drop when you bump the resolution. Personally I'd be fine with checkerboard rendering if it looks good in practice and allows higher-quality graphics. True 4K could be reserved for VR.
Yea. 60fps is tricky though, because the CPU is not twice as fast. With that GPU power you could render at 1080p at 60fps, but your game simulation might have a hard time reaching that speed.
Yea. It's not very hard - if you want something responsive, you have a limited amount of time between when you read input and when you present something on screen.
This delta can be greater than the interval at which you present images. For example, many game engines take 30ms to read input and simulate on one thread, then take 30ms to render on another thread while the next frame is simulated. This means they present something every 30ms, but the delay between input and display is 60ms. This delay can reach up to 100ms without being noticed by players.
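A toy sketch of that pipelined timing, with the 30ms-per-stage numbers from above (purely illustrative, not from any particular engine):

```python
# Toy model of a pipelined game loop: simulation and rendering each take
# one 30 ms slot, overlapping on separate threads.
SIM_MS = 30     # read input + simulate one frame
RENDER_MS = 30  # render that frame on another thread

# Because the two stages overlap, a new frame is presented every slot...
present_interval = max(SIM_MS, RENDER_MS)

# ...but each frame's input still travels through both stages in series.
input_to_display_latency = SIM_MS + RENDER_MS

print(present_interval)          # 30 ms between presented frames
print(input_to_display_latency)  # 60 ms from input to pixels
```

So the presentation rate and the input-to-display latency are two different numbers, and pipelining deeper raises the latency without changing the frame rate.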
A 100ms delay is extremely noticeable, especially in faster-paced games. That's 6 frames at 60fps, or 3 at 30. While you may get away with that in 4X games, no FPS will ever go above 50ms of input lag, because it seriously becomes unpleasant to play.
I might be misunderstanding what you're saying here, but from what you've said it sounds like the game would run at ~30fps, whereas many games are designed ideally to run at 60fps, which would allow only ~16ms to take input and simulate?
I think the pain point of DX12 is its incompatibility with Windows 7 (and 8). However, Windows 10 is getting a lot of traction and has surpassed Windows 7 in gamer market share (see http://store.steampowered.com/hwsurvey/directx/).
With that in mind, if you are working on a AAA game that will come out in a couple of years, your target platforms will most likely be PS4, Xbox One and Windows 10. With those three, Vulkan doesn't make sense; DX12 will be used on Windows and Xbox One, and PS4 has its own proprietary API. Why complicate things with another API if you can use (almost) the same implementation for Xbox One and Windows 10?
If Windows 7 proves to still be somewhat popular among gamers, then Vulkan might be used to port games to that platform; since it's similar to DX12, that will be much easier than a DX11 port.
However, right now, if game developers want to use the latest APIs, Vulkan is a good choice since you will still be able to target Windows 7. If a lot of big games are released with Vulkan within the next 2 years, it might slow down Windows 10's adoption, and the market share of Vulkan-capable systems (Windows Vista and up) will stay greater than that of DX12 (Windows 10) for longer. But for how many years? At some point, Windows 10 (or whatever comes after) will dominate, and Vulkan's backward compatibility will stop being a useful selling point.
So I'm not sure why you say there's no reason to use DX12. There are many. There are also a lot of good reasons to choose Vulkan that I didn't talk about here. I just don't see how Vulkan can dominate the Windows market, much as OpenGL never did.
Actually, companies make it hard on purpose to find an email address or contact form, because support is expensive. It's a well-known trick to bury it deep within some bullshit interface full of answers to questions you don't care about. I always find the "Was this helpful?" questions on support pages frustrating, because they don't have a "No, I want to freaking talk to somebody" button.
This is a great application of generally accepted software engineering principles (and some less common ones, like "cleanup week"), but it would have been nice to see some real metrics instead of napkin graphs.
I like this format. It feels like watching a good advertisement instead of one that just screams "BUY!". It's great because, at the end of the day, there's some education, something to be taken away. Even if that thing is just a write-up you can show your boss to convince her it's a good idea to pick up some of these practices. There's some value at the end of the day, so yay :D.
And honestly, I'd love to work for a team like this. I'm starving for a team that'll kick my ass and let me learn what it's like to work with the best practices.
Not that what I do right now is bad. Education-wise, though, it's fairly stagnant.
> but it would have been nice to see some real metric instead of some napkin graphs
That's roughly why software engineering isn't really engineering. We're too cool for quantification. We feel that we don't need to give solid metrics to justify investment in what we think is cool. But, boy, if management asks us for something we don't like, then they'd better hand us solid, quantifiable evidence of its utility.
This is just an idea, since I've never built such a filter, but you could automate a large part of filtering NSFW images. A quick Google search led to this paper: http://cs229.stanford.edu/proj2005/HabisKrsmanovic-ExplicitI...
Once you have that in place, I guess it's better to make it aggressive and tolerate some false positives (safe images flagged as NSFW).
Google "safe image search" has the additional help of searching the content of the page the image is used. You might be able to do the same, up to some limit, by checking the http referer header field to know where requests are coming from. You could scan the referer's page for some keywords. This might give you a better idea of the context where the image is used. Note that this might be tricky, since you probably don't want traffic coming out of your server to some child porn site.
That said, those are just some ideas. YouTube has a good community that flags videos, but also an army of reviewers who look at the flagged content.
Exactly, and in fact it's usually not as simple as "you just work on the previous frame's data". There's a bunch of data dependencies that need to be respected during a frame update. For example, if you update the player position with respect to user input, you will want to update its animation in the same update, and also check that it's not colliding with anything. Otherwise you're just lagging behind.
That's a problem only if your engine runs so slowly that the changes between frames are relatively large. They shouldn't be; otherwise you'll end up with inconsistencies no matter how many sources of data you use when computing the new frame. It's entirely legal to run your 'world' locally at a higher rate than the display rate to resolve such issues.
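Running the world at a higher rate than the display is the classic fixed-timestep accumulator pattern. A minimal sketch, assuming illustrative rates of 120 Hz simulation and 60 Hz display (the trivial dict "world" is a stand-in for real game state):

```python
# Fixed-timestep sketch: the world steps at 120 Hz even though the
# display only presents at 60 Hz. Rates are illustrative assumptions.
SIM_DT = 1.0 / 120    # one simulation step
FRAME_DT = 1.0 / 60   # one displayed frame

def run_frame(world, accumulator):
    # Bank the frame's worth of time, then consume it in fixed steps.
    accumulator += FRAME_DT
    while accumulator >= SIM_DT:   # may step the world several times
        world["t"] += SIM_DT       # ...per presented frame
        accumulator -= SIM_DT
    return accumulator

world = {"t": 0.0}
acc = 0.0
for _ in range(60):                # one second of display frames
    acc = run_frame(world, acc)
# world["t"] is now ~1.0 s, advanced in 120 small, consistent steps
```

Because every step uses the same small `SIM_DT`, the per-step state changes stay small and consistent regardless of how fast the display runs.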