
domainExtractor

Extract domains/subdomains/FQDNs from files and URLs

Installation:

git clone https://github.com/intrudir/domainExtractor.git

Usage Examples:

Run the script without args to see usage:
python3 domainExtractor.py
usage: domainExtractor.py [-h] [-f INPUTFILE] [-u URL] [-t TARGET] [-v]

This script will extract domains from the file you specify and add it to a final file

optional arguments:
  -h, --help            show this help message and exit
  -f INPUTFILE, --file INPUTFILE
                        Specify the file to extract domains from
  -u URL, --url URL     Specify the web page to extract domains from. One at a time for now
  -t TARGET, --target TARGET
                        Specify the target top-level domain you'd like to find and extract e.g. uber.com
  -v, --verbose         Enable slightly more verbose console output
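
For reference, the help output above corresponds to an argparse setup roughly like the sketch below. This is inferred from the usage text, not copied from the script's source, so option destinations and defaults are assumptions.

```python
import argparse

# Sketch of a parser that would produce the help output above
# (reconstructed from the usage text; the actual script may differ).
parser = argparse.ArgumentParser(
    description="This script will extract domains from the file you specify and add it to a final file"
)
parser.add_argument("-f", "--file", dest="inputfile",
                    help="Specify the file to extract domains from")
parser.add_argument("-u", "--url",
                    help="Specify the web page to extract domains from. One at a time for now")
parser.add_argument("-t", "--target",
                    help="Specify the target top-level domain you'd like to find and extract e.g. uber.com")
parser.add_argument("-v", "--verbose", action="store_true",
                    help="Enable slightly more verbose console output")
args = parser.parse_args()
```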

Matching a specified target domain

Specify your source and a target domain to search for and extract.

Extracting from files

Using any file with text in it, extract all domains that fall under yahoo.com:
python3 domainExtractor.py -f ~/Desktop/yahoo/test/test.html -t yahoo.com

It will extract, sort and dedup all domains that are found.
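
Conceptually, the extraction step boils down to scanning the file for hostname-like strings, keeping only those that end in the target domain, then sorting and deduplicating. The sketch below illustrates that idea only; the regex and function names are assumptions, not the script's actual code.

```python
import re
import sys

# Illustrative only: match hostname-like tokens, then keep the target domain
# and its subdomains, sorted and deduplicated.
DOMAIN_RE = re.compile(r"[\w.-]+\.[A-Za-z]{2,}")

def extract_domains(path, target):
    with open(path, encoding="utf-8", errors="ignore") as fh:
        text = fh.read()
    found = {m.group(0).lower() for m in DOMAIN_RE.finditer(text)}
    return sorted(d for d in found if d == target or d.endswith("." + target))

if __name__ == "__main__":
    # e.g. python3 sketch.py test.html yahoo.com
    for domain in extract_domains(sys.argv[1], sys.argv[2]):
        print(domain)
```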


You can specify multiple files by separating them with commas (no spaces):

python3 domainExtractor.py -f amass.playstation.net.txt,subfinder.playstation.net.txt --target playstation.net
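
Handling multiple files is just a matter of splitting the -f value on commas and running the same extraction over each path; a rough illustration (variable names are hypothetical):

```python
# Rough illustration of splitting a comma-separated -f value into paths.
inputfile = "amass.playstation.net.txt,subfinder.playstation.net.txt"
paths = [p.strip() for p in inputfile.split(",") if p.strip()]
for path in paths:
    print(f"extracting from {path}")
```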


[Screenshot: example output]

Extracting from a web page

Pulling data directly from Yahoo's homepage and extracting all domains that fall under yahoo.com:
python3 domainExtractor.py -u "https://yahoo.com" -t yahoo.com
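
The same filtering can be applied to a fetched page body. The sketch below uses the standard library's urllib for the request; whether domainExtractor uses urllib, requests, or something else is not stated in the README, so treat this as an assumption.

```python
import re
from urllib.request import urlopen

DOMAIN_RE = re.compile(r"[\w.-]+\.[A-Za-z]{2,}")

def extract_from_url(url, target):
    # Fetch and loosely decode the page; real code should handle errors and timeouts.
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
    found = {m.group(0).lower() for m in DOMAIN_RE.finditer(html)}
    return sorted(d for d in found if d == target or d.endswith("." + target))

print("\n".join(extract_from_url("https://yahoo.com", "yahoo.com")))
```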


Specifying all domains

You can either omit the --target flag entirely or specify 'all'; the script will then extract every domain it finds (currently .com, .net, .org, .tv, and .io).
# pulling from a file, extract all domains
python3 domainExtractor.py -f test.html --target all

# pull from the yahoo.com home page and extract all domains; with no target specified, it defaults to 'all'
python3 domainExtractor.py -u "https://yahoo.com"
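
With 'all', the filter conceptually becomes a suffix check against that small TLD set instead of a single target domain. A hypothetical sketch of such a filter:

```python
# Hypothetical filter: with --target all, keep any hostname ending in one of
# the TLDs the README lists; otherwise match the target domain and its subdomains.
TLDS = (".com", ".net", ".org", ".tv", ".io")

def keep(host, target="all"):
    host = host.lower()
    if target == "all":
        return host.endswith(TLDS)
    return host == target or host.endswith("." + target)

print(keep("news.yahoo.com"))            # True
print(keep("example.de"))                # False: .de is not in the TLD set
print(keep("api.uber.com", "uber.com"))  # True
```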


[Screenshot: example output]

Domains not previously found

If you run the script again for the same target, a few things happen:
1) If a final file already exists for that target, it notifies you of any domains you didn't have before
2) It appends those new domains to the final file
3) It logs each new domain to logs/newdomains.{target}.txt along with the date and time it was found


This allows you to check the same target across multiple files and be notified of any new domains found!
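
In other words, the script diffs the freshly extracted set against the existing final file, appends only the difference, and logs it with a timestamp. Below is a minimal sketch of that flow, assuming the logs/newdomains.{target}.txt convention from above; the final file's name and the timestamp format are assumptions.

```python
import os
from datetime import datetime

def record_new_domains(domains, target, final_path=None):
    # Assumed file layout: a per-target final file plus the README's
    # logs/newdomains.{target}.txt log of newly discovered domains.
    final_path = final_path or f"final.{target}.txt"
    log_path = f"logs/newdomains.{target}.txt"

    existing = set()
    if os.path.exists(final_path):
        with open(final_path) as fh:
            existing = {line.strip() for line in fh if line.strip()}

    new = sorted(set(domains) - existing)
    if new:
        os.makedirs("logs", exist_ok=True)
        stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        with open(final_path, "a") as final, open(log_path, "a") as log:
            for d in new:
                final.write(d + "\n")
                log.write(f"{stamp} {d}\n")
    return new
```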

I first use it against my Amass results, then against my Subfinder results.
The script will sort and dedupe, and notify me of how many new, unique domains came from Subfinder's results.


It will add them to the final file and log just the new ones to logs/newdomains.{target}.txt
