Sorted HUGE Wordlists
#1
Hi, everyone!
I use Hashtopolis to run attacks with huge wordlists.
Can anyone help me with the following?
- How can I merge many different huge wordlists? For example, I used cat *.txt >> merged_wordlist.txt, and my merged_wordlist.txt will be about 1 TB in size.

Then I tried different CLI tools such as "sort", "gawk" and others, but none of them worked properly; I got broken wordlists or errors.

This is a big problem for me!

Maybe somebody can help me? How can I merge huge wordlists, remove duplicates, and then split the result into small chunks for Hashtopolis?

And when I get a new wordlist, how should I sort it and remove its duplicates against the existing set?

Maybe I need to use some DB that holds all of my wordlists; then when I get a new wordlist I load it into the DB, remove duplicates there, and export a new deduplicated wordlist.

Thx
#2
Why do you brute-force with wordlists? Why not use hashcat masks?
#3
(07-12-2020, 02:59 PM)s3in!c Wrote: Why do you brute-force with wordlists? Why not use hashcat masks?
Hi s3in!c!
I use wordlists because they are built from the best collections of real passwords; that's why they are so huge, and I can't use a mask attack.
My problem is how to merge these wordlists and remove duplicates, and then, when I get another wordlist, how to merge it into the existing ones, deduplicate again, and so on...
I think for this task I may need a DB, for example MySQL with unique keys...
Do you have any ideas?
#4
The safest way to sort something uniquely is probably sort on Linux. I always do
Code:
LC_ALL=C sort -u <file>
LC_ALL=C ensures that sort does not try to interpret the data in a higher-level encoding; it sorts and compares at the byte level, which is what you typically want for very large wordlists.

With a DB you are definitely out of luck: there is far too much overhead, and MySQL cannot handle that amount of data quickly without huge hardware.
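To tie it together with the OP's workflow, the whole merge, dedupe, and split process can be sketched with standard GNU tools. Filenames, the temp directory, and the chunk size below are placeholders; adjust them to your disks and to what Hashtopolis handles comfortably.

```shell
# 1) Merge all wordlists into one file.
cat wordlists/*.txt > merged_wordlist.txt

# 2) Sort and deduplicate at byte level (LC_ALL=C avoids locale-aware
#    comparisons). -T points sort's temp files at a disk with enough
#    free space for a ~1 TB input.
LC_ALL=C sort -u -T /path/to/big/tmp merged_wordlist.txt -o sorted_unique.txt

# 3) Split into chunks for Hashtopolis, e.g. 50 million lines each;
#    -l splits on line boundaries so no candidate is cut in half.
split -l 50000000 -d --additional-suffix=.txt sorted_unique.txt chunk_
```

splitting by line count (-l) rather than by byte size (-b) matters here, because a byte split would truncate a password at each chunk boundary.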
#5
(07-15-2020, 12:43 PM)s3in!c Wrote: The safest way to sort something uniquely is probably sort on Linux. I always do
Code:
LC_ALL=C sort -u <file>
LC_ALL=C ensures that sort does not try to interpret the data in a higher-level encoding; it sorts and compares at the byte level, which is what you typically want for very large wordlists.

With a DB you are definitely out of luck: there is far too much overhead, and MySQL cannot handle that amount of data quickly without huge hardware.
OK, and will LC_ALL=C sort -u <file> work on a file of 1 TB?
#6
Yes, as long as you have enough free space for sort to store its intermediate results in /tmp (or in a directory you can choose with -T).
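For the incremental case the OP asked about, the whole 1 TB master list does not need re-sorting each time: once it is sorted, a new wordlist can be sorted on its own and then merged in a single linear pass with sort -m. A sketch, assuming the filenames below and a master list already produced by LC_ALL=C sort -u:

```shell
# Sort only the (much smaller) new wordlist. -S caps sort's RAM buffer;
# -T puts temp files on a disk with enough free space.
LC_ALL=C sort -u -S 4G -T /path/to/big/tmp new_wordlist.txt -o new_sorted.txt

# Merge the two pre-sorted files in one pass, dropping duplicates.
# -m only works correctly if both inputs are already sorted the same way.
LC_ALL=C sort -m -u master_sorted.txt new_sorted.txt -o master_merged.txt
mv master_merged.txt master_sorted.txt
```

Because -m never re-sorts, the merge runs at roughly sequential disk speed, which is exactly the "load new list, dedupe, export" workflow without any database.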