Problem when running the countries spider.....

Hi everyone,
I'm following the course, in Section 4. Project 1 Spiders from A to Z,
and when I run countries I get the error below and can't go any further. What problem am I running into here?
My own guess is that the trouble is here: DEBUG: Rule at line 2 without any user agent to enforce it on.
But I don't know how to deal with it. Has anyone run into the same thing?

(Virtual_Workspace) C:\Users\jeff\projects\worldmeters>scrapy crawl countries
2022-12-11 12:21:49 [scrapy.utils.log] INFO: Scrapy 2.7.1 started (bot: worldmeters)
2022-12-11 12:21:49 [scrapy.utils.log] INFO: Versions: lxml 4.9.1.0, libxml2 2.9.14, cssselect 1.2.0, parsel 1.7.0, w3lib 2.1.0, Twisted 22.2.0, Python 3.8.15 (default, Nov 24 2022, 14:38:14) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 22.0.0 (OpenSSL 1.1.1s 1 Nov 2022), cryptography 37.0.4, Platform Windows-10-10.0.19042-SP0
2022-12-11 12:21:49 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'worldmeters',
 'NEWSPIDER_MODULE': 'worldmeters.spiders',
 'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
 'ROBOTSTXT_OBEY': True,
 'SPIDER_MODULES': ['worldmeters.spiders'],
 'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2022-12-11 12:21:49 [asyncio] DEBUG: Using selector: SelectSelector
2022-12-11 12:21:49 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2022-12-11 12:21:49 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2022-12-11 12:21:49 [scrapy.extensions.telnet] INFO: Telnet Password: 31cecf90769cba70
2022-12-11 12:21:49 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2022-12-11 12:21:50 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2022-12-11 12:21:50 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2022-12-11 12:21:50 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2022-12-11 12:21:50 [scrapy.core.engine] INFO: Spider opened
2022-12-11 12:21:50 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-12-11 12:21:50 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-12-11 12:21:51 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://www.worldometers.info/robots.txt> (referer: None)
2022-12-11 12:21:51 [protego] DEBUG: Rule at line 2 without any user agent to enforce it on.
2022-12-11 12:21:51 [protego] DEBUG: Rule at line 10 without any user agent to enforce it on.
2022-12-11 12:21:51 [protego] DEBUG: Rule at line 12 without any user agent to enforce it on.
2022-12-11 12:21:51 [protego] DEBUG: Rule at line 14 without any user agent to enforce it on.
2022-12-11 12:21:51 [protego] DEBUG: Rule at line 16 without any user agent to enforce it on.
2022-12-11 12:21:51 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.worldometers.info/> (referer: None)
2022-12-11 12:21:51 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.worldometers.info/> (referer: None)
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\utils\defer.py", line 240, in iter_errback
    yield next(it)
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\utils\python.py", line 338, in __next__
    return next(self.data)
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\utils\python.py", line 338, in __next__
    return next(self.data)
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\core\spidermw.py", line 79, in process_sync
    for r in iterable:
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in <genexpr>
    return (r for r in result or () if self._filter(r, spider))
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\core\spidermw.py", line 79, in process_sync
    for r in iterable:
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 336, in <genexpr>
    return (self._set_referer(r, response) for r in result or ())
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\core\spidermw.py", line 79, in process_sync
    for r in iterable:
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 28, in <genexpr>
    return (r for r in result or () if self._filter(r, spider))
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\core\spidermw.py", line 79, in process_sync
    for r in iterable:
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 32, in <genexpr>
    return (r for r in result or () if self._filter(r, response, spider))
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\core\spidermw.py", line 79, in process_sync
    for r in iterable:
  File "C:\Users\jeff\projects\worldmeters\worldmeters\spiders\countries.py", line 23, in parse
    yield response.follow(url=link)
UnboundLocalError: local variable 'link' referenced before assignment
2022-12-11 12:21:51 [scrapy.core.engine] INFO: Closing spider (finished)
2022-12-11 12:21:51 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 460,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 15062,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/404': 1,
 'elapsed_time_seconds': 1.175927,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2022, 12, 11, 4, 21, 51, 657758),
 'httpcompression/response_bytes': 78180,
 'httpcompression/response_count': 2,
 'log_count/DEBUG': 10,
 'log_count/ERROR': 1,
 'log_count/INFO': 10,
 'response_received_count': 2,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/404': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/UnboundLocalError': 1,
 'start_time': datetime.datetime(2022, 12, 11, 4, 21, 50, 481831)}
2022-12-11 12:21:51 [scrapy.core.engine] INFO: Spider closed (finished)

Have you first used scrapy shell to check, one by one, whether your selectors are correct?

What I mean is: following the approach the instructor showed in class, use scrapy shell to work out the xpath or css expressions, and only then write them into the spider's Python file. Websites get redesigned all the time, so if you use the instructor's source code as-is, it may no longer find the data you want to scrape.
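
For example, a quick check in scrapy shell could look like the session below. The XPath expressions are simply the ones that show up later in this thread, so treat them as an assumption; the actual output depends on the site's current layout:

scrapy shell "https://www.worldometers.info/"

# inside the shell, try each expression before putting it into the spider
>>> countries = response.xpath("//td/a")
>>> len(countries)                           # 0 means the selector no longer matches anything
>>> countries[0].xpath(".//text()").get()    # should print a country name
>>> countries[0].xpath(".//@href").get()     # should print that country's relative link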

A side note: it's great that you raise problems as soon as you hit them instead of waiting for the weekly discussion; that's one of the advantages of learning together. Please don't be shy about posting questions in the future. What follows simply describes the steps I go through when I see a problem like this and try to track down the cause. They may help some of you, they may still not solve your problem (since you haven't posted your source code), and there may well be better ways to go about it.

I googled the message you suspect is the cause, "without any user agent to enforce it on", and picked two fairly recent Stack Overflow posts. Summary below:


First post

There, the error was caused by an extra trailing / at the end of allowed_domains.

The instructor explicitly said in the course not to add it, so I'll assume that's not the cause here.


Second post

This one covers two things:

  1. Ignore the site's rules that forbid crawling (not sure whether the course has covered this yet).

In settings.py, set ROBOTSTXT_OBEY = False:

ROBOTSTXT_OBEY = False

Note: armchair theory on my part; pure inference, I haven't actually tried it.

  2. The xpath expression is wrong, so the data can't be found.

That's why I asked you right at the start, "Have you first used scrapy shell to check, one by one, whether your selectors are correct?"

It's probably this one … UnboundLocalError: local variable 'link' referenced before assignment
It would help to post your original code
so this is easier to track down.


Thanks to both of you for the replies. My countries.py code is below; please help me figure out what's wrong, thanks a lot.
Also, following sky's suggestion, I tried changing the setting in settings.py to ROBOTSTXT_OBEY = False,
but it still doesn't work, so the problem probably isn't there.
As for point 2, "The xpath expression is wrong, so the data can't be found":

That's why I asked you right at the start, "Have you first used scrapy shell to check, one by one, whether your selectors are correct?"

I'm not really sure how to do that. How do I check whether a selector is correct? Could you give me an example?
This is what I'm running:

The countries.py code is as follows:

import scrapy

class CountriesSpider(scrapy.Spider):
    name = 'countries'
    allowed_domains = ['www.worldometers.info']
    start_urls = ['https://www.worldometers.info/']

    def parse(self, response):
        countries = response.xpath("//td/a").getall()

        for country in countries:
            name = country.xpath(".//text()").get()
            link = country.xpath("//@gref").get()
        
        # absolute_url = f"https://www.worldometers.info{link}"
        # absolute_url = response.urljoin(link)

        yield response.follow(url=link)
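
Just to illustrate what that UnboundLocalError means: with the yield sitting outside the for loop, link is only ever assigned inside the loop body, so if the loop body never runs (for example because the selector matched nothing), link is read before it was ever bound. A stripped-down, Scrapy-free sketch of the same failure, purely as an illustration and not code from the project:

def parse():
    for item in []:       # the loop body never runs, so link is never assigned
        link = item
    yield link            # reading link here raises UnboundLocalError

list(parse())             # UnboundLocalError: local variable 'link' referenced before assignment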

It's probably just a typo. Line 10 should be link = country.xpath("//@href").get()

Thanks~ I hadn't noticed the typo,
but I just changed it and the result is still the same.
I really don't get what's going on…

(Virtual_Workspace) C:\Users\jeff\projects\worldmeters>scrapy crawl countries
2022-12-13 21:31:01 [scrapy.utils.log] INFO: Scrapy 2.7.1 started (bot: worldmeters)
2022-12-13 21:31:01 [scrapy.utils.log] INFO: Versions: lxml 4.9.1.0, libxml2 2.9.14, cssselect 1.2.0, parsel 1.7.0, w3lib 2.1.0, Twisted 22.2.0, Python 3.8.15 (default, Nov 24 2022, 14:38:14) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 22.0.0 (OpenSSL 1.1.1s  1 Nov 2022), cryptography 37.0.4, Platform Windows-10-10.0.19042-SP0
2022-12-13 21:31:01 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'worldmeters',
 'NEWSPIDER_MODULE': 'worldmeters.spiders',
 'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
 'SPIDER_MODULES': ['worldmeters.spiders'],
 'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2022-12-13 21:31:01 [asyncio] DEBUG: Using selector: SelectSelector
2022-12-13 21:31:01 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2022-12-13 21:31:01 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2022-12-13 21:31:01 [scrapy.extensions.telnet] INFO: Telnet Password: c1a441ec7655dbf6
2022-12-13 21:31:01 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2022-12-13 21:31:02 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2022-12-13 21:31:02 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2022-12-13 21:31:02 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2022-12-13 21:31:02 [scrapy.core.engine] INFO: Spider opened
2022-12-13 21:31:02 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-12-13 21:31:02 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-12-13 21:31:03 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.worldometers.info/> (referer: None)
2022-12-13 21:31:03 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.worldometers.info/> (referer: None)
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\utils\defer.py", line 240, in iter_errback
    yield next(it)
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\utils\python.py", line 338, in __next__
    return next(self.data)
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\utils\python.py", line 338, in __next__
    return next(self.data)
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\core\spidermw.py", line 79, in process_sync
    for r in iterable:
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in <genexpr>
    return (r for r in result or () if self._filter(r, spider))
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\core\spidermw.py", line 79, in process_sync
    for r in iterable:
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 336, in <genexpr>
    return (self._set_referer(r, response) for r in result or ())
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\core\spidermw.py", line 79, in process_sync
    for r in iterable:
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 28, in <genexpr>
    return (r for r in result or () if self._filter(r, spider))
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\core\spidermw.py", line 79, in process_sync
    for r in iterable:
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 32, in <genexpr>
    return (r for r in result or () if self._filter(r, response, spider))
  File "C:\ProgramData\Anaconda3\envs\Virtual_Workspace\lib\site-packages\scrapy\core\spidermw.py", line 79, in process_sync
    for r in iterable:
  File "C:\Users\jeff\projects\worldmeters\worldmeters\spiders\countries.py", line 23, in parse
    yield response.follow(url=link)
UnboundLocalError: local variable 'link' referenced before assignment
2022-12-13 21:31:03 [scrapy.core.engine] INFO: Closing spider (finished)
2022-12-13 21:31:03 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 225,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 13835,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'elapsed_time_seconds': 0.905087,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2022, 12, 13, 13, 31, 3, 209293),
 'httpcompression/response_bytes': 76239,
 'httpcompression/response_count': 1,
 'log_count/DEBUG': 4,
 'log_count/ERROR': 1,
 'log_count/INFO': 10,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/UnboundLocalError': 1,
 'start_time': datetime.datetime(2022, 12, 13, 13, 31, 2, 304206)}
2022-12-13 21:31:03 [scrapy.core.engine] INFO: Spider closed (finished)

That line is still there in the traceback. Make sure the file has actually been saved, then run it again.

Thanks to both of you for the help. I kept trying just now and finally found the problem:
the yield wasn't indented (into the for loop), which is why it never worked. Now I can finally continue with the course, thank you~~~ ^^

    def parse(self, response):
        countries = response.xpath("//td/a")
        for country in countries:
            name = country.xpath(".//text()").get()
            link = country.xpath(".//@href").get()
        
            # absolute_url = f"https://www.worldometers.info{link}"
            # absolute_url = response.urljoin(link)

            yield response.follow(url=link)
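
For anyone who lands here later, this is what the whole countries.py looks like once the pieces in this thread are put together. The dict yield is an extra line of my own (not in the original post), only there to show the extracted name and link actually being used; everything else is the class header from the earlier post plus the corrected parse above:

import scrapy


class CountriesSpider(scrapy.Spider):
    name = 'countries'
    allowed_domains = ['www.worldometers.info']
    start_urls = ['https://www.worldometers.info/']

    def parse(self, response):
        # keep the SelectorList (no .getall()), so each country is still a selector
        countries = response.xpath("//td/a")
        for country in countries:
            name = country.xpath(".//text()").get()
            # the leading dot keeps the expression relative to this <a> element
            link = country.xpath(".//@href").get()

            # extra, not in the original post: emit the scraped values as an item
            yield {'country_name': name, 'link': link}

            # response.follow resolves the relative link against response.url,
            # so no manual urljoin / f-string is needed; it must stay inside the loop
            yield response.follow(url=link)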