Cores or FLOPS?
I have a GTX 960 and I'm in the market to upgrade, but seeing how much even used cards on eBay are going for, I was wondering: is it the core count or the FLOPS that I should be looking at?

Right now all I can afford is a GTX 970 at $180, and since it has double the FLOPS of the 960, should I be looking at FLOPS rather than cores?
Looking to determine what? FLOP performance is completely irrelevant for hashcat. Core count, frequency, and architecture determine how fast a card is.
I read somewhere that a GTX 1070 outperforms a 980 Ti.

But if it has more FLOPS, doesn't that mean the frequency is higher?
FLOPS means "floating-point operations per second" and is usually calculated like this:

Cores * Clock * 2 = FLOPS

(the factor 2 comes from a fused multiply-add counting as two operations per cycle)

It's a very basic, purely theoretical value that doesn't account for instruction sets, algorithms, drivers, throttling, or anything else that affects hashcat's output. You might use it to compare two GPUs of the same family, but that's about it. That said, a GPU with 10,000 GFLOPS will usually perform better under hashcat than one with only 2,000 GFLOPS.

The same goes for core count or clock rate: each is just one number that, on its own, doesn't tell you the speed of a GPU under a specific workload.

GTX 780 Ti: 2880 cores, 920 MHz, 5.1 TFLOPS
-> ~8 Gigahashes under MD5 with v2.01 (CUDA)

GTX 980 Ti: 2816 cores, 1075 MHz, 6.1 TFLOPS
-> ~14 Gigahashes under MD5 with v2.01 (CUDA)
-> ~17 Gigahashes under MD5 with v3.x
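As a quick sanity check of the rule of thumb above, here's a small Python sketch using the card specs and MD5 rates quoted in this thread. It shows that hashes-per-theoretical-FLOP differs substantially between the two cards, i.e. FLOPS alone doesn't predict hashcat speed across architectures:

```python
# Rule of thumb from above: peak FLOPS ~= cores * clock * 2
# (the factor 2 assumes one fused multiply-add per core per cycle).
def peak_tflops(cores: int, clock_mhz: float) -> float:
    return cores * clock_mhz * 2 / 1e6  # cores * MHz * 2 -> TFLOPS

cards = {
    # name: (cores, clock in MHz, MD5 GH/s as quoted in this thread)
    "GTX 780 Ti": (2880, 920, 8.0),
    "GTX 980 Ti": (2816, 1075, 14.0),
}

for name, (cores, mhz, ghs) in cards.items():
    tflops = peak_tflops(cores, mhz)
    # The GH-per-TFLOP ratio differs by roughly 50% between the cards,
    # so raw FLOPS can't be used to compare hashcat performance here.
    print(f"{name}: {tflops:.1f} TFLOPS, {ghs} GH/s, "
          f"{ghs / tflops:.2f} GH/s per TFLOP")
```

Running this gives ~5.3 TFLOPS for the 780 Ti and ~6.1 TFLOPS for the 980 Ti (close to the figures above), yet the 980 Ti delivers far more hashes per FLOP, which is the architecture effect flomac is describing.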
@flomac but hashcat doesn't use floats, it's all integer operations, right? So how does higher GFLOPS equate to faster speed? I thought it was about how many instructions can be computed per second?