Hashtopus - distributed solution - Printable Version

+- hashcat Forum (https://hashcat.net/forum)
+-- Forum: Misc (https://hashcat.net/forum/forum-15.html)
+--- Forum: User Contributions (https://hashcat.net/forum/forum-25.html)
+--- Thread: Hashtopus - distributed solution (/thread-3159.html)
RE: Hashtopus - distributed solution - curlyboi - 08-19-2014

VS 2010 on Win7. And yes, this is the correct way to set up a brute-force increment. When an agent changes hardware, it will not be updated here. If you change hardware but stay on the same brand (Nvidia or AMD), you are OK. If you switch between them, you either want to re-register, or simply switch the combobox from one brand to the other. This is the option that decides whether your agent gets oclHashcat or cudaHashcat.

RE: Hashtopus - distributed solution - Pyrex - 08-19-2014

(08-19-2014, 04:58 PM)curlyboi Wrote: VS 2010 on Win7.

Re-check my post; what do you think about the drop-down solution? I edited the post.

RE: Hashtopus - distributed solution - sharkchaser - 08-24-2014

I have been tinkering with Hashtopus on Amazon's EC2 service using Amazon's flavor of Linux. I have the LAMP stack working fine, and after a good deal of tinkering I have been able to get an agent virtual machine working, complete with a CUDA environment and a working Mono. After this small bit of success, I created an image (AMI) of my instance so that I could deploy multiple agents at one time.

I started up two of them. The first started up and I executed hashtopus.exe. The agent was shown in the agent list. On the second machine, I downloaded hashtopus.exe from the server, gave it a new token, and it downloaded the cudaHashcat binaries and looked OK. When I switched back over to the first instance deployed, I had this output:

Hashtopus 0.8.8a
Registering to server...Enter registration voucher: fBI6wn8W
OK.
Logging in to server...OK.
Loading task...failed: No active tasks. Waiting for next assignment...
Loading task...failed: No active tasks.
Loading task...failed: No active tasks.
Loading task...failed: No active tasks.
Could not download hashcat: Access token invalid.

Now the second agent shows up in the agent list, but not the first. Each instance has a different IP address, so that isn't the problem. Is there some unique identifier that the server uses to identify the agents that I am unaware of? Interestingly enough, no matter which of the agents/servers is shown in the list, it shows a previously executed test session in the chunk list. I'm guessing there is some identifier that has to be changed between each VM. Thank you for your help.

RE: Hashtopus - distributed solution - sharkchaser - 08-25-2014

The problem was that no uid field was being populated in the database. To resolve this, I cleared out all agents in the DB, started up an instance with a new token, added a uid by hand, then started the next.

(08-24-2014, 07:36 AM)sharkchaser Wrote: I have been tinkering with Hashtopus on Amazon's EC2 service using Amazon's flavor of Linux. I have the LAMP stack working fine, and after a good deal of tinkering I have been able to get an agent virtual machine working, complete with a CUDA environment and a working Mono. After this small bit of success, I created an image (AMI) of my instance so that I could deploy multiple agents at one time. I started up two of them. The first started up and I executed hashtopus.exe. The agent was shown in the agent list. On the second machine, I downloaded hashtopus.exe from the server, gave it a new token, and it downloaded the cudaHashcat binaries and looked OK. When I switched back over to the first instance deployed, I had this output:

RE: Hashtopus - distributed solution - sharkchaser - 08-25-2014

Anyone have any experience using hcmask files with Hashtopus? Does this screw up the keyspace calculations and chunking times?
Should I lower chunking times or turn off the auto benchmark? The agents are running for many hours per chunk and it is hard to estimate how long these tasks are going to take. Because I am paying by the hour on EC2, I would like to be able to estimate.

RE: Hashtopus - distributed solution - curlyboi - 08-25-2014

(08-25-2014, 06:28 PM)sharkchaser Wrote: Anyone have any experience using hcmask files with Hashtopus? Does this screw up the keyspace calculations and chunking times? Should I lower chunking times or turn off the auto benchmark? The agents are running for many hours per chunk and it is hard to estimate how long these tasks are going to take. Because I am paying by the hour on EC2, I would like to be able to estimate.

I vaguely remember that hcmask files are the same as -i; you can't use them in distributed attacks.

RE: Hashtopus - distributed solution - sharkchaser - 08-25-2014

(08-25-2014, 06:53 PM)curlyboi Wrote: (08-25-2014, 06:28 PM)sharkchaser Wrote: Anyone have any experience using hcmask files with Hashtopus? Does this screw up the keyspace calculations and chunking times? Should I lower chunking times or turn off the auto benchmark? The agents are running for many hours per chunk and it is hard to estimate how long these tasks are going to take. Because I am paying by the hour on EC2, I would like to be able to estimate.

Thank you. I'll just create a long list of mask attacks and execute them all at once. Thank you for your work on Hashtopus.

RE: Hashtopus - distributed solution - bitguard - 08-29-2014

My speed is approximately 44,300 MH/s. I'm trying to crack an MD5 hash whose plaintext is 13 characters long (a-z, A-Z, 0-9, special characters, i.e. a 95-character table). The attack command is: -a 3 #HL ?a?a?a?a?a?a?a?a?a?a?a?a?a. Hashtopus shows me about 47 days to crack this hash (estimated time)... is this correct? (95^13 / speed is far more than 47 days. Am I wrong?)

RE: Hashtopus - distributed solution - atom - 08-29-2014

It's wrong, it should take much, much longer. Counting back to the days of the dinosaurs should be less than that.

RE: Hashtopus - distributed solution - curlyboi - 08-29-2014

(08-29-2014, 01:13 PM)bitguard Wrote: My speed is approximately 44,300 MH/s. I'm trying to crack an MD5 hash whose plaintext is 13 characters long (a-z, A-Z, 0-9, special characters, i.e. a 95-character table).

It should become more precise as more chunks get cracked.
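As a rough sanity check on the numbers in the last exchange: assuming the standard ?a charset covers 95 printable characters and the quoted aggregate speed of roughly 44,300 MH/s holds for the whole keyspace, exhausting ?a x 13 works out to tens of millions of years, which is what atom is pointing at. A minimal Python sketch of the arithmetic (the speed and charset size are taken from the thread, everything else is just unit conversion):

charset_size = 95                  # size of hashcat's ?a charset
mask_length = 13                   # ?a?a?a?a?a?a?a?a?a?a?a?a?a
speed_hps = 44_300 * 10**6         # 44,300 MH/s expressed in hashes per second

keyspace = charset_size ** mask_length
seconds = keyspace / speed_hps
years = seconds / (365.25 * 24 * 3600)

print(f"keyspace: {keyspace:.3e} candidates")            # ~5.1e25
print(f"time    : {seconds:.3e} s, ~{years:,.0f} years") # ~37 million years

So the 47-day figure shown for the task is only an early extrapolation, not the true exhaustive search time.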
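On the earlier hcmask question: since curlyboi recalls that .hcmask files behave like -i and do not split across agents, sharkchaser's workaround is to queue one plain mask attack per line instead. A hedged sketch of that pre-processing step is below; the file name is a placeholder, the #HL hashlist placeholder is copied from bitguard's command above, and the less common .hcmask features (custom-charset columns, escaped commas) are deliberately left unhandled.

# Turn an .hcmask file into one standalone -a 3 mask attack per line,
# so each mask can be created as its own distributed task.
# "masks.hcmask" is a placeholder path; custom-charset columns and
# escaped commas in the .hcmask format are not handled in this sketch.

def read_masks(path):
    masks = []
    with open(path, "r", encoding="utf-8") as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue              # skip blank lines and comments
            masks.append(line)
    return masks

if __name__ == "__main__":
    for mask in read_masks("masks.hcmask"):
        # each printed command would become a separate task on the server
        print(f"-a 3 #HL {mask}")

Each of these masks has a keyspace hashcat can report on its own, so chunking and time estimates stay meaningful for every task.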