
# web-scraping-for-sentence-mining

This repository contains scripts designed to scrape content from various websites for language learning. While most of the websites targeted are in my native language, the repository also includes support for Reverso Context, which works with multiple languages. These scripts download or generate audio, and then write all the scraped data to CSV files. You can then merge these files and import them into Anki, allowing you to create thousands of language-learning flashcards in just a few minutes.
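For example, merging all of the generated CSV files into a single file for a one-shot Anki import can be as simple as concatenating them. The sketch below is only an illustration, not part of the repository: it assumes the tab-separated files live under `csv/` (as described in the Reverso section below) and contain no header rows, and `merged_deck.csv` is a placeholder name.

```python
from pathlib import Path

# Minimal sketch: concatenate every tab-separated CSV under csv/ into one file
# that can be imported into Anki in a single step. Assumes the files have no
# header rows, so plain concatenation does not duplicate anything.
merged = Path("merged_deck.csv")
with merged.open("w", encoding="utf-8") as out:
    for csv_file in sorted(Path("csv").glob("*.csv")):
        out.write(csv_file.read_text(encoding="utf-8"))
```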

See also: MorphMan, an Anki add-on for dealing with low-yield and repetitive cards, as well as cards that are too short or too long.

## 🏡 Running locally

First, clone the repository and create a virtual environment for the project (to avoid conflicts between library versions). Using virtualenvwrapper:

- `mkvirtualenv web-scraping-sentences` - creates the virtual environment
- `workon web-scraping-sentences` - activates the virtual environment

Then, install all dependencies with `pip install -r requirements.txt`.

### Additional setup for WaveNet

If you want to use WaveNet, you need to get an API key on Google Cloud Platform and fill it in `api_key.json`, following the example from `api_key_example.json`.
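For reference, generating WaveNet audio boils down to a request to the Google Cloud Text-to-Speech API. The sketch below is only an illustration, not the repository's actual code: it assumes `api_key.json` stores the key under an `api_key` field (check `api_key_example.json` for the real structure), and the Brazilian Portuguese voice name is just an example.

```python
import base64
import json
import os

import requests

# Assumption: api_key.json stores the key under an "api_key" field;
# check api_key_example.json for the structure this repository actually expects.
with open("api_key.json", encoding="utf-8") as f:
    api_key = json.load(f)["api_key"]


def synthesize(text, filename, language="pt-BR", voice="pt-BR-Wavenet-A"):
    """Request WaveNet audio for `text` and save it as an MP3 file."""
    response = requests.post(
        "https://texttospeech.googleapis.com/v1/text:synthesize",
        params={"key": api_key},
        json={
            "input": {"text": text},
            "voice": {"languageCode": language, "name": voice},
            "audioConfig": {"audioEncoding": "MP3"},
        },
        timeout=30,
    )
    response.raise_for_status()
    # The API returns the audio as a base64-encoded string.
    audio = base64.b64decode(response.json()["audioContent"])
    with open(filename, "wb") as out:
        out.write(audio)


os.makedirs("audios", exist_ok=True)
synthesize("Exemplo de frase para mineração.", "audios/exemplo.mp3")
```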

## Scraping with Reverso

To scrape Reverso, go to `reverso_scraping/` and refer to `scrap.py`. Pass the URLs you want to scrape to `scrap_page()`. By default, the output is placed under `audios/` for WaveNet audio and under `csv/` for CSV files (tab-separated, using `\t`) ready to be imported into Anki. You can also crawl URLs for common words and expressions with `crawl_top()`, which retrieves URLs from rankings generated by Reverso and writes them to a `.txt` file. That file can then be fed to `scrap_pages_multithread()`, which scrapes multiple pages in parallel. A few examples are left commented out in `scrap.py`.
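As a rough usage sketch (the actual signatures live in `scrap.py` and may differ), a session could look like the following. It assumes you run from `reverso_scraping/`, that `scrap_page()` takes a single URL, that `crawl_top()` writes the crawled URLs to a text file whose name you pass in, and that `scrap_pages_multithread()` takes a list of URLs; the Reverso URL and file name are placeholders.

```python
# Hypothetical usage; the real function signatures are in reverso_scraping/scrap.py.
from scrap import scrap_page, crawl_top, scrap_pages_multithread

# Scrape a single Reverso Context page (example URL only).
scrap_page("https://context.reverso.net/translation/portuguese-english/saudade")

# Crawl Reverso's rankings of common words/expressions into a .txt file...
crawl_top("urls.txt")

# ...then scrape all of the collected URLs in parallel.
with open("urls.txt", encoding="utf-8") as f:
    urls = [line.strip() for line in f if line.strip()]
scrap_pages_multithread(urls)
```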