Hashtopus - distributed solution - Printable Version

+- hashcat Forum (https://hashcat.net/forum)
+-- Forum: Misc (https://hashcat.net/forum/forum-15.html)
+--- Forum: User Contributions (https://hashcat.net/forum/forum-25.html)
+--- Thread: Hashtopus - distributed solution (/thread-3159.html)
Hashtopus - distributed solution - curlyboi - 02-18-2014

Hashtopus - distributed GPU hashcat wrapper

Download: http://hashtopus.nech.me/beta (just grab the file with the highest number)
Install guide: http://www.youtube.com/watch?v=cazDoJhJvTM
Github: https://github.com/curlyboi?tab=repositories

Architecture
- Computing agent in C#.NET 2.0, running on Windows or under Mono on Linux
- PHP web server + MySQL
- PHP web admin

It has too many features to cover here. Check the manual inside the installation package, or at least the video.


RE: Hashtopus - distributed solution - Kuci - 02-18-2014

Well, it seems like a good idea. However, making it for Window$ is not a good idea. If you think that making it for Unix-based systems is beyond your skills, then just learn how to develop for it. And one more piece of advice: learn C++ or Java instead of C#. They're much more useful.


RE: Hashtopus - distributed solution - curlyboi - 02-18-2014

Like I said in the first post, it is aimed at gamers, 99% of whose systems run Windows, although with Steam on Linux that might change soon. I am not a programmer, so learning a new programming language is not very interesting for me, and reaching a level where I could create software of the same quality I now produce in C# would take a lot of time, which I don't have.


RE: Hashtopus - distributed solution - curlyboi - 02-19-2014

Nonetheless, I am writing it against .NET 2.0, so it might as well run under Mono.


RE: Hashtopus - distributed solution - curlyboi - 02-25-2014

Gentlemen, I am standing before a tough problem. I have built Hashtopus with great agent instability in mind, because it is designed for agent deployment on computers which are not dedicated to hash cracking. Basically, I expect the agent could disconnect without warning at any second. That is why I transfer cracked hashes to the server almost in real time (using a small buffer) and why I dispatch chunks worth relatively little computing time (default 5 minutes).

That is also why I wanted to implement protection against mid-chunk interruption. Basically, with each flush of the cracked-hashes buffer, I would also report which part of the keyspace I am currently cracking, so the server could keep track of how much of an incomplete chunk was already performed. Should the agent die mid-cracking, the reassign-to-another-agent feature (already implemented) wouldn't have to reassign the whole chunk, but only the remaining part.

But recently I have discovered that if the base loop of the keyspace (which depends, for example, on the -n parameter and the first letter of the brute-force mask) is too big, the .restore file might not get updated during the whole 5-minute chunk. That means I have to find another way to track individual chunk progress. One way that popped into my mind was to have hashcat periodically output its [s]tatus, but since I normally crack with the --quiet parameter and read the output to achieve instant, event-driven output capturing, I would have to rewrite it to be file-based, which would create even bigger problems, since no virtual files exist on Windows. I am therefore asking whether any of you sees a solution to this that I am missing.


RE: Hashtopus - distributed solution - mastercracker - 02-25-2014

Since you say that you dispatch small chunks (5 minutes), I don't think that it's a big deal to simply re-crack the chunk from the start. Otherwise, you can set --restore-timer=60 so that it saves the restore file every minute.
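A minimal sketch of the partial-reassignment idea from the 02-25 post, assuming a chunk is handed out as a keyspace slice via hashcat's -s (skip) and -l (limit) parameters, and that the agent reports how many keyspace units it has already finished with each cracked-hash flush. The Chunk type and the helper names are hypothetical, not part of Hashtopus:

Code:
using System;

// Hypothetical illustration: a chunk is a (skip, length) slice of the keyspace,
// dispatched to an agent as "-s skip -l length" on the hashcat command line.
class Chunk
{
    public long Skip;      // absolute keyspace offset where this chunk starts
    public long Length;    // number of keyspace units in this chunk
    public long Progress;  // units already completed, as last reported by the agent
}

class ChunkMath
{
    // If the agent died mid-chunk, hand the untouched remainder to another agent
    // instead of re-cracking the whole chunk from the beginning.
    public static Chunk RemainderOf(Chunk dead)
    {
        Chunk rest = new Chunk();
        rest.Skip = dead.Skip + dead.Progress;
        rest.Length = dead.Length - dead.Progress;
        rest.Progress = 0;
        return rest;
    }

    public static string ToHashcatArgs(Chunk c)
    {
        return string.Format("-s {0} -l {1}", c.Skip, c.Length);
    }
}

For example, a chunk dispatched as -s 1000000 -l 500000 whose agent last reported 200000 finished units would be reassigned as -s 1200000 -l 300000, so only the remaining part of the chunk is redone.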
RE: Hashtopus - distributed solution - curlyboi - 02-26-2014

(02-25-2014, 07:32 PM)mastercracker Wrote: Since you say that you dispatch small chunks (5 minutes), I don't think that it's a big deal to simply re-crack the chunk from the start. Otherwise, you can set --restore-timer=60 so that it saves the restore file every minute.

Thank you for this reaction. Unfortunately, even if you force --restore-timer=1, the file gets rewritten but the contents only update when the base loop finishes. So in many cases the file gets re-written every second, yet the contents only change every few minutes or so... Just try it yourself. As for the chunk size - if I knew I could report the chunk position to the server every time I submit the hash buffer, I could make the chunks much longer. My goal is simply to eliminate duplicate work.


Hashtopus Screenshots - curlyboi - 03-01-2014

Hi, I would like to share some screenshots from the web GUI development. Please excuse the shabby design, I am in no way a web designer.

Agent list:
Agent detail:
Hashlist list:
Hashlist detail:
Hashlist hashes:
Task list:
Task detail:
New hashlist:
New task:


RE: Hashtopus - distributed solution - goat - 03-01-2014

Very nice job


RE: Hashtopus - distributed solution - ati6990 - 03-03-2014

Awesome dude, cool job. Keep us up2date
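Following up on the status-output idea from the 02-26 post: if hashcat is run without --quiet and its [s]tatus screen is captured periodically, the agent could scrape the current position from the Progress line and send it along with each hash-buffer submit. The exact line format varies between versions, so the regular expression below is only an assumption based on output shaped like "Progress.......: 123456/1000000 (12.35%)":

Code:
using System;
using System.Text.RegularExpressions;

class StatusParser
{
    // Assumed status line shape, e.g. "Progress.......: 123456/1000000 (12.35%)".
    // The label and layout differ between hashcat versions; treat this as a sketch.
    static readonly Regex ProgressLine =
        new Regex(@"^Progress\.*:\s*(\d+)/(\d+)", RegexOptions.Compiled);

    // Returns the number of keyspace units done so far, or -1 if the line
    // is not a progress line.
    public static long ParseDone(string line)
    {
        Match m = ProgressLine.Match(line);
        if (!m.Success)
            return -1;
        return long.Parse(m.Groups[1].Value);
    }
}

The agent would feed each captured output line through ParseDone and, whenever it flushes its cracked-hash buffer to the server, include the latest non-negative value, so the server always knows how far into the chunk the agent got before a possible disconnect.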