
Regarding Scrapy Mysql Integration

Business Requirements:

Using "Scrapy Cloud", I would like to configure a few websites and a set of keywords for each website.

The expectation from "Scrapy Cloud" is that a scheduler should run and scan all the websites at a regular interval [say every X hours; X should be configurable from Scrapy Cloud].

After every "X" hours of crawling, "Scrapy Cloud" should extract information such as "Title", "Author Name", "Date Of Publication", "Content", and "Article URL" from all configured websites and push it to my MySQL database [without any manual intervention]. After every "X" hours the same process should be repeated, and only the new/updated articles (those that fall within that specific "X"-hour window) should be pushed to my MySQL database.
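To make the "only new/updated articles" expectation concrete, this is the kind of upsert I have in mind. The table name, column names, and the assumption that "Article URL" carries a UNIQUE index (so a re-crawled article updates its existing row instead of duplicating it) are all placeholders, not my real schema:

```python
# Sketch of the "insert new, update existing" behaviour I am after.
# Assumes a UNIQUE index on article_url; names are placeholders.

FIELDS = ("title", "author_name", "date_of_publication", "content", "article_url")

def upsert_sql(table, fields=FIELDS):
    """Build a parameterized MySQL upsert using %s placeholders
    (the style mysql-connector-python expects)."""
    cols = ", ".join(fields)
    marks = ", ".join(["%s"] * len(fields))
    # Update every column except the unique key itself on conflict.
    updates = ", ".join(f"{c} = VALUES({c})" for c in fields if c != "article_url")
    return (f"INSERT INTO {table} ({cols}) VALUES ({marks}) "
            f"ON DUPLICATE KEY UPDATE {updates}")
```

With that in place, re-running the crawl every X hours would simply re-execute the same statement per article, and MySQL itself decides whether the row is new or an update.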


I followed the URL below to integrate MySQL with a locally installed Scrapy:

https://github.com/mysql/mysql-connector-python 

 

I am able to connect to MySQL, but somehow the data is not being pushed to the MySQL table.
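For context, this is roughly the shape of the item pipeline I am testing (simplified; the credentials, table name, and column names are placeholders, not my real configuration). The two things I have been told commonly cause "logs to console but nothing in the table" are the pipeline not being registered in ITEM_PIPELINES, and the connection never being committed:

```python
# Simplified sketch of a Scrapy item pipeline writing to MySQL via
# mysql-connector-python. All names/credentials below are placeholders.

FIELDS = ("title", "author_name", "date_of_publication", "content", "article_url")

def insert_sql(table, fields=FIELDS):
    """Build a parameterized INSERT using the %s placeholder style
    that mysql-connector-python expects."""
    cols = ", ".join(fields)
    marks = ", ".join(["%s"] * len(fields))
    return f"INSERT INTO {table} ({cols}) VALUES ({marks})"

class MySQLPipeline:
    def open_spider(self, spider):
        # mysql-connector-python is assumed to be installed;
        # imported lazily so the module loads without it.
        import mysql.connector
        self.conn = mysql.connector.connect(
            host="localhost", user="user", password="pass", database="articles_db"
        )
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        self.cursor.execute(insert_sql("articles"),
                            [item.get(f) for f in FIELDS])
        # mysql-connector does not autocommit by default: without this
        # commit() the execute() succeeds but the rows never persist.
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.cursor.close()
        self.conn.close()
```

And in settings.py the pipeline has to be enabled, e.g. `ITEM_PIPELINES = {"myproject.pipelines.MySQLPipeline": 300}` (module path is a placeholder for my project's).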

I am attaching my MySQL configuration file(s) and the generated log for your reference. Could you please have a look and help us understand what exactly we are missing here?

If you need any further information to troubleshoot the problem, please let me know and I will provide it.


Looking forward to hearing from you guys.

 

Note: I am able to log the extracted data to the console, but not able to push it to MySQL.

Attachments:
- .py file (516 Bytes)
- .py file (1.12 KB)
- .log file (11.3 KB)