02-01-2017, 11:40 PM
It crawls fine and finishes after a while, but the output file I specified isn't there. It does save a file named after the webpage in the Wordhound folder, but that file is empty. What I tried:
-running with sudo
-setting the output files manually (to the wordhound folder itself and in other folders)
-leaving the output option at its default
-different webpages
Basically I want to extract as many words as possible from a webpage. Are there any alternatives?
(Not directly Hashcat related, but I'll take the risk of asking here.)
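As an alternative to Wordhound, CeWL is a well-known tool for generating wordlists from websites. If you'd rather avoid another tool, here is a minimal sketch of the same idea in plain Python using only the standard library: parse the page's HTML, skip script/style content, and split the visible text into words. The URL and output filename below are placeholders, not anything from Wordhound.

```python
import re
from html.parser import HTMLParser


class WordExtractor(HTMLParser):
    """Collect visible text from an HTML page, skipping script/style blocks."""

    def __init__(self):
        super().__init__()
        self._skip = 0          # depth inside <script>/<style> tags
        self.words = set()      # unique words seen so far

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            # \w+ grabs runs of letters/digits/underscores as "words"
            self.words.update(re.findall(r"\w+", data))


def extract_words(html):
    """Return the sorted unique words found in an HTML string."""
    parser = WordExtractor()
    parser.feed(html)
    return sorted(parser.words)


if __name__ == "__main__":
    # Fetch a page and dump its unique words, one per line.
    # "https://example.com" and "wordlist.txt" are placeholders.
    from urllib.request import urlopen
    html = urlopen("https://example.com").read().decode("utf-8", "replace")
    with open("wordlist.txt", "w") as f:
        f.write("\n".join(extract_words(html)))
```

This only deduplicates exact strings; for cracking you would typically post-process the list (case variants, rules) in Hashcat itself.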