The most complete systematic Python web-scraping course on the web, from basics to advanced (source code included); job-ready after completion (beginner Python crawler code)
Personal WeChat official account: yk 坤帝

Reply "scrapy" in the official account to receive the organized resources.
Chapter 1: Introduction to crawlers
1. Getting to know crawlers
Chapter 2: requests in practice (basic crawlers)
1. Scraping Douban movies
2. KFC restaurant lookup
3. Cracking Baidu Translate
4. Sogou homepage
5. Web page collector
6. Scraping data from the drug administration (NMPA)
Chapter 3: Parsing crawled data (bs4, xpath, regular expressions)
1. bs4 parsing basics
2. bs4 case study
3. xpath parsing basics
4. xpath case study: scraping 4K images
5. xpath case study: 58.com second-hand housing listings
6. xpath case study: scraping free resume templates from 站長素材
7. xpath case study: scraping the names of cities nationwide
8. Regex parsing
9. Regex parsing: paginated scraping
10. Scraping images
Chapter 4: Automatic captcha recognition
1. Captcha recognition for gushiwen.cn (古詩文網)
fateadm_api.py (the configuration needed for recognition; recommended to keep it in the same folder as the script)
Calling the API
Chapter 5: Advanced requests (simulated login)
1. Working with proxies
2. Simulated login to Renren
3. Simulated login to Renren
Chapter 6: High-performance asynchronous crawlers (thread pools, coroutines)
1. Multi-task asynchronous crawler with aiohttp
2. A flask test server
3. Multi-task coroutines
4. Multi-task asynchronous crawler
5. Example
6. Synchronous crawler
7. Basic use of a thread pool
8. Applying a thread pool to a crawler case study
9. Coroutines
Chapter 7: Handling dynamically loaded data (the selenium module, simulated 12306 login)
1. selenium basics
2. Other automated selenium actions
3. Sample code for logging in to 12306
4. Action chains and handling iframes
5. Headless Chrome with anti-detection
6. Simulated 12306 login with selenium
7. Simulated login to Qzone (QQ空間)
Chapter 8: The scrapy framework
1. Hands-on projects of all kinds and the various scrapy configuration changes
2. bossPro example
3. bossPro example
4. Database example
Chapter 1: Introduction to crawlers
Level 0: Getting to know crawlers
1. A first look at crawlers
A crawler is, in essence, a program that fetches data of value to us from the web.
2. Mapping out the path
2-1. How a browser works
(1) Parsing data: when the server sends its response back, the browser does not hand the raw data straight to us; the data is written in the computer's language, so the browser first translates it into something we can read.
(2) Extracting data: from the parsed data, we pick out the parts that are useful to us.
(3) Storing data: the selected useful data is saved to a file or a database.
2-2. How a crawler works
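A crawler mirrors this same workflow in code: fetch the page, parse it, extract the useful data, and store it. The snippet below is a minimal sketch of those four steps using requests and BeautifulSoup; the URL, the selector, and the output file are placeholders for illustration, not part of the course code.

import requests
from bs4 import BeautifulSoup

# 1. Fetch: request the page from the server, just as a browser would
response = requests.get('https://example.com/')

# 2. Parse: turn the raw HTML text into a structure we can query
soup = BeautifulSoup(response.text, 'lxml')

# 3. Extract: keep only the pieces we care about (here, the text of every link)
links = [a.get_text(strip=True) for a in soup.find_all('a')]

# 4. Store: save the extracted data to a file
with open('links.txt', 'w', encoding='utf-8') as fp:
    fp.write('\n'.join(links))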
Chapter 2: requests in practice (basic crawlers)
1. Scraping Douban movies
import requests
import json

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Safari/537.36'
}
url = "https://movie.douban.com/j/chart/top_list"
params = {
    'type': '24',
    'interval_id': '100:90',
    'action': '',
    'start': '0',   # index of the first movie to fetch
    'limit': '20'   # number of movies to fetch in one request
}
response = requests.get(url, params=params, headers=headers)
list_data = response.json()
fp = open('douban.json', 'w', encoding='utf-8')
json.dump(list_data, fp=fp, ensure_ascii=False)
print('over!!!!')
2. KFC restaurant lookup
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Safari/537.36'
}
url = 'http://www.kfc.com.cn/kfccda/ashx/GetStoreList.ashx?op=keyword'
word = input('請輸入一個地址:')
params = {
    'cname': '',
    'pid': '',
    'keyword': word,
    'pageIndex': '1',
    'pageSize': '10'
}
response = requests.post(url, params=params, headers=headers)
page_text = response.text
fileName = word + '.txt'
with open(fileName, 'w', encoding='utf-8') as f:
    f.write(page_text)
3. Cracking Baidu Translate
import requests
import json

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Safari/537.36'
}
post_url = 'https://fanyi.baidu.com/sug'
word = input('enter a word:')
data = {
    'kw': word
}
response = requests.post(url=post_url, data=data, headers=headers)
dic_obj = response.json()
fileName = word + '.json'
fp = open(fileName, 'w', encoding='utf-8')
# ensure_ascii=False because Chinese characters cannot be represented in ASCII
json.dump(dic_obj, fp=fp, ensure_ascii=False)
print('over!')
4. Sogou homepage
import requests

url = 'https://www.sogou.com/?pid=sogou-site-d5da28d4865fb927'
response = requests.get(url)
page_text = response.text
print(page_text)
with open('./sougou.html', 'w', encoding='utf-8') as fp:
    fp.write(page_text)
print('爬取數據結束!!!')
5. Web page collector
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Safari/537.36'
}
url = 'https://www.sogou.com/sogou'
kw = input('enter a word:')
param = {
    'query': kw
}
response = requests.get(url, params=param, headers=headers)
page_text = response.text
fileName = kw + '.html'
with open(fileName, 'w', encoding='utf-8') as fp:
    fp.write(page_text)
print(fileName, '保存成功!!!')
6. Scraping data from the drug administration (NMPA)
import requests
import json

url = "http://scxk.nmpa.gov.cn:81/xk/itownet/portalAction.do?method=getXkzsList"
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4385.0 Safari/537.36'
}
id_list = []
for page in range(1, 6):
    page = str(page)
    data = {
        'on': 'true',
        'page': page,
        'pageSize': '15',
        'productName': '',
        'conditionType': '1',
        'applyname': '',
        'applysn': ''
    }
    json_ids = requests.post(url, data=data, headers=headers).json()
    for dic in json_ids['list']:
        id_list.append(dic['ID'])
    # print(id_list)

post_url = 'http://scxk.nmpa.gov.cn:81/xk/itownet/portalAction.do?method=getXkzsById'
all_data_list = []
for id in id_list:
    data = {
        'id': id
    }
    detail_json = requests.post(url=post_url, data=data, headers=headers).json()
    # print(detail_json, '---------------------over')
    all_data_list.append(detail_json)

fp = open('allData.json', 'w', encoding='utf-8')
json.dump(all_data_list, fp=fp, ensure_ascii=False)
print('over!!!')
Chapter 3: Parsing crawled data (bs4, xpath, regular expressions)
1. bs4 parsing basics
from bs4 import BeautifulSoup

fp = open('第三章 數據分析/text.html', 'r', encoding='utf-8')
soup = BeautifulSoup(fp, 'lxml')
# print(soup)
# print(soup.a)
# print(soup.div)
# print(soup.find('div'))
# print(soup.find('div', class_="song"))
# print(soup.find_all('a'))
# print(soup.select('.tang'))
# print(soup.select('.tang > ul > li > a')[0].text)
# print(soup.find('div', class_="song").text)
# print(soup.find('div', class_="song").string)
print(soup.select('.tang > ul > li > a')[0]['href'])
2. bs4 case study
from bs4 import BeautifulSoup
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Safari/537.36'
}
url = "http://sanguo.5000yan.com/"
page_text = requests.get(url, headers=headers).content
# print(page_text)
soup = BeautifulSoup(page_text, 'lxml')
li_list = soup.select('.list > ul > li')
fp = open('./sanguo.txt', 'w', encoding='utf-8')
for li in li_list:
    title = li.a.string
    # print(title)
    detail_url = 'http://sanguo.5000yan.com/' + li.a['href']
    print(detail_url)
    detail_page_text = requests.get(detail_url, headers=headers).content
    detail_soup = BeautifulSoup(detail_page_text, 'lxml')
    div_tag = detail_soup.find('div', class_="grap")
    content = div_tag.text
    fp.write(title + ":" + content + '\n')
    print(title, '爬取成功!!!')
3. xpath parsing basics
from lxml import etree

tree = etree.parse('第三章 數據分析/text.html')
# r = tree.xpath('/html/head/title')
# print(r)
# r = tree.xpath('/html/body/div')
# print(r)
# r = tree.xpath('/html//div')
# print(r)
# r = tree.xpath('//div')
# print(r)
# r = tree.xpath('//div[@class="song"]')
# print(r)
# r = tree.xpath('//div[@class="song"]/p[3]')
# print(r)
# r = tree.xpath('//div[@class="tang"]//li[5]/a/text()')
# print(r)
# r = tree.xpath('//li[7]/i/text()')
# print(r)
# r = tree.xpath('//li[7]//text()')
# print(r)
# r = tree.xpath('//div[@class="tang"]//text()')
# print(r)
# r = tree.xpath('//div[@class="song"]/img/@src')
# print(r)
4. xpath case study: scraping 4K images
import requests
from lxml import etree
import os

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Safari/537.36'
}
url = 'http://pic.netbian.com/4kmeinv/'
response = requests.get(url, headers=headers)
# response.encoding = response.apparent_encoding
# response.encoding = 'utf-8'
page_text = response.text
tree = etree.HTML(page_text)
li_list = tree.xpath('//div[@class="slist"]/ul/li')
if not os.path.exists('./picLibs'):
    os.mkdir('./picLibs')
for li in li_list:
    img_src = 'http://pic.netbian.com/' + li.xpath('./a/img/@src')[0]
    img_name = li.xpath('./a/img/@alt')[0] + '.jpg'
    img_name = img_name.encode('iso-8859-1').decode('gbk')
    # print(img_name, img_src)
    # print(type(img_name))
    img_data = requests.get(url=img_src, headers=headers).content
    img_path = 'picLibs/' + img_name
    # print(img_path)
    with open(img_path, 'wb') as fp:
        fp.write(img_data)
    print(img_name, "下載成功")
5. xpath case study: 58.com second-hand housing listings
import requests
from lxml import etree

url = 'https://bj.58.com/ershoufang/p2/'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Safari/537.36'
}
page_text = requests.get(url=url, headers=headers).text
tree = etree.HTML(page_text)
li_list = tree.xpath('//section[@class="list-left"]/section[2]/div')
fp = open('58.txt', 'w', encoding='utf-8')
for li in li_list:
    title = li.xpath('./a/div[2]/div/div/h3/text()')[0]
    print(title)
    fp.write(title + '\n')
6. xpath case study: scraping free resume templates from 站長素材
import requests
from lxml import etree
import os

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Safari/537.36'
}
url = 'https://www.aqistudy.cn/historydata/'
page_text = requests.get(url, headers=headers).text

(Note: in this excerpt the snippet above merely repeats the opening of case 7; the resume-template code itself appears to have been cut off. A hedged sketch of the usual approach follows.)
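Since the original code for this case is missing here, the sketch below shows how such a scrape is typically structured with xpath: read the listing page, follow each template's detail page, find its download link, and save the archive. The listing URL (assumed to be https://sc.chinaz.com/jianli/free.html) and every xpath expression are assumptions for illustration, not the original code.

import requests
from lxml import etree
import os

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Safari/537.36'
}
# Assumed listing page of free resume templates (not taken from the original code)
url = 'https://sc.chinaz.com/jianli/free.html'
page_text = requests.get(url, headers=headers).text
tree = etree.HTML(page_text)

if not os.path.exists('./resumeLibs'):
    os.mkdir('./resumeLibs')

# Assumed page structure: each item links to a detail page that holds the download link
a_list = tree.xpath('//div[@id="container"]/div/a')
for a in a_list:
    detail_url = 'https:' + a.xpath('./@href')[0]
    name = a.xpath('./img/@alt')[0] + '.rar'
    detail_text = requests.get(detail_url, headers=headers).text
    detail_tree = etree.HTML(detail_text)
    # Assumed xpath for the first download mirror on the detail page
    download_url = detail_tree.xpath('//div[@class="clearfix mt20 downlist"]/ul/li[1]/a/@href')[0]
    data = requests.get(download_url, headers=headers).content
    with open('./resumeLibs/' + name, 'wb') as fp:
        fp.write(data)
    print(name, 'saved')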
7. xpath case study: scraping the names of cities nationwide
import requests
from lxml import etree
import os

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Safari/537.36'
}
url = 'https://www.aqistudy.cn/historydata/'
page_text = requests.get(url, headers=headers).text
tree = etree.HTML(page_text)
# holt_li_list = tree.xpath('//div[@class="bottom"]/ul/li')
# all_city_name = []
# for li in holt_li_list:
#     host_city_name = li.xpath('./a/text()')[0]
#     all_city_name.append(host_city_name)
# city_name_list = tree.xpath('//div[@class="bottom"]/ul/div[2]/li')
# for li in city_name_list:
#     city_name = li.xpath('./a/text()')[0]
#     all_city_name.append(city_name)
# print(all_city_name, len(all_city_name))

# holt_li_list = tree.xpath('//div[@class="bottom"]/ul//li')
holt_li_list = tree.xpath('//div[@class="bottom"]/ul/li | //div[@class="bottom"]/ul/div[2]/li')
all_city_name = []
for li in holt_li_list:
    host_city_name = li.xpath('./a/text()')[0]
    all_city_name.append(host_city_name)
print(all_city_name, len(all_city_name))
8. Regex parsing
import requests
import re
import os

if not os.path.exists('./qiutuLibs'):
    os.mkdir('./qiutuLibs')
url = 'https://www.qiushibaike.com/imgrank/'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4385.0 Safari/537.36'
}
page_text = requests.get(url, headers=headers).text
ex = '
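The regular expression itself and the rest of this script, along with cases 9 and 10 ("Regex parsing: paginated scraping" and "Scraping images"), are cut off at this point in the excerpt. The sketch below shows how a regex-based image scrape of this kind is typically completed; the pattern, variable names, and download loop are assumptions, not the original code.

# Hedged sketch: the pattern and loop below are illustrative assumptions.
import requests
import re
import os

if not os.path.exists('./qiutuLibs'):
    os.mkdir('./qiutuLibs')

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4385.0 Safari/537.36'
}
url = 'https://www.qiushibaike.com/imgrank/'
page_text = requests.get(url, headers=headers).text

# A non-greedy pattern that captures the src attribute of each thumbnail image;
# adjust it to the page's actual markup.
ex = '<div class="thumb">.*?<img src="(.*?)" alt.*?</div>'
img_src_list = re.findall(ex, page_text, re.S)   # re.S lets '.' match across newlines

for src in img_src_list:
    src = 'https:' + src                         # the captured URLs are protocol-relative
    img_data = requests.get(url=src, headers=headers).content
    img_name = src.split('/')[-1]
    with open('./qiutuLibs/' + img_name, 'wb') as fp:
        fp.write(img_data)
    print(img_name, 'downloaded')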
(Only the tail of an image-download snippet survives here:)

    .content
with open('qiutu.jpg', 'wb') as fp:
    fp.write(img_data)

Chapter 4: Automatic captcha recognition

1. Captcha recognition for gushiwen.cn (古詩文網)

A developer account and password for the recognition service can be applied for.

import requests
from lxml import etree
from fateadm_api import FateadmApi

def TestFunc(imgPath, codyType):
    pd_id =
fateadm_api.py (the configuration needed for recognition; it is recommended to keep it in the same folder as the script)
Calling the API
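The captcha-recognition code itself is not included in this excerpt. As a rough illustration of how the recognition call slots into the gushiwen login flow, the sketch below fetches the login page, locates the captcha image with xpath, saves it, and hands it to a recognition function; the login URL, the xpath, and recognize_captcha() are hypothetical stand-ins (the course uses the fateadm_api module for the actual recognition), not the original code.

import requests
from lxml import etree

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Safari/537.36'
}

def recognize_captcha(img_path):
    # Hypothetical placeholder: send the saved image to the recognition service
    # (e.g. via fateadm_api) and return the recognized text.
    raise NotImplementedError

# 1. Fetch the login page and locate the captcha image with xpath (assumed URL and xpath)
login_url = 'https://so.gushiwen.cn/user/login.aspx'
page_text = requests.get(login_url, headers=headers).text
tree = etree.HTML(page_text)
img_src = 'https://so.gushiwen.cn' + tree.xpath('//img[@id="imgCode"]/@src')[0]

# 2. Save the captcha image locally
with open('./code.jpg', 'wb') as fp:
    fp.write(requests.get(img_src, headers=headers).content)

# 3. Recognize the captcha and use the result in the login form data
code_text = recognize_captcha('./code.jpg')
print('recognized captcha:', code_text)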
Because of the 20,000-character limit, no more content can be posted here; apologies. The complete resources are available on the official account.
Personal WeChat official account: yk 坤帝
Reply "scrapy" in the official account to receive the organized resources.
Python · web scraping