More importantly, it's wrong information to begin with: the cards have been tested and verified to run compute shaders asynchronously, which Oxide never disputed anyway. What they don't do well is context switching, presumably because there's only one compute engine, but they're so much faster that it takes a lot to bog them down regardless.
I don't have a bone to pick here; personally I'm a 3dfx man and loathe both of the companies. Glide was far superior for performance when we got stuck with those assclowns at Microshit and their craptastic obfuscated rendering methods. We're actually getting closer to how we used to do things now, with Mantle and DX12 being more like Glide was 20 years ago as far as close-to-metal (CTM) access goes.
GCN performs better at higher levels of parallelism because it has a 64 command queue setup spread across 8 ACEs; nVidia performs better at lower levels because it only has a single engine with 31 command queues, which is massively more efficient. These are not single queues that work through 8 and 31 jobs in order, they run 8 and 31 simultaneously. This hilarious mischaracterisation is the heart of the problem; you have to go back to the original Kepler, the 600 series, to find a single shader queue. AMD bloody well knows this too, because their compute engines are called ACEs for a reason: they're asynchronous compute engines, individually.
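For anyone who hasn't poked at it, here's roughly what the software side of all this looks like: a minimal D3D12 sketch (the function name and the already-created device are mine for illustration, not from any particular engine) that asks the driver for a compute queue alongside the graphics queue. Whether those submissions actually overlap is then down to the hardware, the ACEs on GCN or the driver's scheduling on Maxwell.

```cpp
// Minimal sketch, not anyone's engine code: a D3D12 app creates a graphics
// (DIRECT) queue and a separate COMPUTE queue; work submitted to the second
// one is what the hardware may run asynchronously.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Assumes 'device' was already created elsewhere via D3D12CreateDevice.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;      // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute-only queue
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Command lists executed on computeQueue may overlap with gfxQueue work;
    // on GCN they land on the ACEs, on Maxwell the driver decides what happens.
}
```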
Maxwell's single engine setup is so much more efficient than the GCN shaders that it performs better even at four times its command queue depth. Basically, you're far better off with a 980Ti until you get quite high in simultaneous shader commands. There is little loss on the GCN architecture as you go past its queue depth, but its basic job latency is very high. Now, performing shaders and graphics together without a performance loss doesn't seem to go very well on Maxwell, but it's so much faster with them run independently that it still outperforms even with the context switching, until you hit very high levels of shader parallelism. GCN is basically so overbuilt, with 8 ACEs on a Fury X, that it will run out of processing power long before it runs out of queues to load up in a typical workload.
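To put some toy numbers on that trade-off (every number here is invented purely for illustration, not a measurement): model Maxwell as doing the raw work faster but serialising compute after graphics with a context switch penalty, and model GCN as overlapping compute with graphics on the ACEs. Where the crossover lands depends entirely on the made-up constants; the shape of it is the point.

```cpp
// Toy model only: invented numbers, not benchmarks. "Maxwell-like" does the
// raw work faster but runs graphics then compute in series, plus a switch
// penalty; "GCN-like" overlaps compute with graphics and only pays once the
// compute no longer hides behind the graphics work.
#include <algorithm>
#include <cstdio>

int main()
{
    const double gfxMs           = 10.0; // graphics portion of a frame (made up)
    const double maxwellSpeedup  = 1.5;  // assumed raw efficiency advantage (made up)
    const double switchPenaltyMs = 0.5;  // assumed context switch cost (made up)

    for (double computeMs : {1.0, 3.0, 6.0, 10.0, 15.0}) {
        // serialised: all the work done faster, plus the switch overhead
        double maxwellLike = (gfxMs + computeMs) / maxwellSpeedup + switchPenaltyMs;
        // overlapped: compute is free until it outgrows the graphics work
        double gcnLike = std::max(gfxMs, computeMs);
        std::printf("compute %5.1f ms -> Maxwell-like %5.1f ms, GCN-like %5.1f ms\n",
                    computeMs, maxwellLike, gcnLike);
    }
    return 0;
}
```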
This is all well documented by enterprising individuals who tested the two architectures to see what they were actually capable of. The GCN architecture is highly future-proofed, and has been for years, with an emphasis on parallelism from before graphics APIs could even exploit it. Maxwell is not, but it's so much more efficient that the 980Ti can still keep up with a Fury X even while its poor context switching bogs it down.