Duplicut – Remove Duplicates From MASSIVE Wordlist, Without Sorting It (For Dictionary-Based Password Cracking)


Quickly dedupe massive wordlists, without changing the order 

Created by nil0x42 and contributors


Modern password wordlist creation usually involves concatenating multiple data sources.

Ideally, the most probable passwords should appear at the start of the wordlist, so the most common passwords are cracked instantly.

Existing dedupe tools force you to choose between preserving the order and handling massive wordlists.

Unfortunately, wordlist creation requires both.

So I wrote duplicut in highly optimized C to address this very specific need.

Quick start

git clone https://github.com/nil0x42/duplicut
cd duplicut/ && make
./duplicut wordlist.txt -o clean-wordlist.txt



  • Features:

    • Handle massive wordlists, even those whose size exceeds available RAM
    • Filter lines by max length (-l option)
    • Can remove lines containing non-printable ASCII chars (-p option)
    • Press any key to show program status at runtime.
  • Implementation:

    • Written in pure C code, designed to be fast
    • Compressed hashmap items on 64 bit platforms
    • Multithreading support
    • [TODO]: Use huge memory pages to increase performance
  • Limitations:

    • Any line longer than 255 chars is ignored
    • Heavily tested on Linux x64, mostly untested on other platforms.

Technical Details

1- Memory optimized:

A uint64 is enough to index lines in the hashmap, by packing the size info within the pointer's extra bits.

2- Massive file handling:

If the whole file can’t fit in memory, it is split into n virtual chunks, then each chunk is tested against the next chunks.

So the complexity is equal to the (n-1)th triangle number: T(n-1) = n(n-1)/2 chunk comparisons.


If you find a bug, or something doesn’t work as expected, please compile duplicut in debug mode and post an issue with attached output:

# debug level can be from 1 to 4
make debug level=1
./duplicut [OPTIONS] 2>&1 | tee /tmp/duplicut-debug.log
