FLOPS stands for "floating point operations per second" and is usually calculated like this:
Cores * Clock * 2 = FLOPS
It is a very basic, purely theoretical value that does not account for chipset instructions, algorithms, drivers, throttling or anything else that affects hashcat's output. You might use it to compare two GPUs of the same family, but that's pretty much it. That said, a GPU with 10,000 GFLOPS will usually perform better under hashcat than a GPU with only 2,000 GFLOPS.
The same goes for core count or clock rate: each is just one value that, on its own, does not tell you how fast a GPU will be in a specific scenario.
Example:
GTX 780 Ti: 2880 cores, 920 MHz, 5.1 TFLOPS
-> ~8 GH/s on MD5 with cudaHashcat 2.01
GTX 980 Ti: 2816 cores, 1075 MHz, 6.1 TFLOPS
-> ~14 GH/s on MD5 with cudaHashcat 2.01
-> ~17 GH/s on MD5 with hashcat 3.x