
3.32 A scraper for 小猪短租 (Xiaozhu short-term rentals)


Build a function that constructs the listing-page URLs, extract the detail-page links from each listing page, then scrape the title, price, reviews, and other fields from each detail page, roughly as sketched below.
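The overall structure looks something like this. It is only a sketch: the paginated listing-URL pattern, the function names build_listing_urls and get_detail_links, and the link selector are assumptions, since the original code only shows the detail-page request.

from bs4 import BeautifulSoup
import requests

# Hypothetical paginated listing-URL pattern for xiaozhu.com;
# the real pattern may differ.
def build_listing_urls(pages):
    return ["http://bj.xiaozhu.com/search-duanzufang-p{}-0/".format(p)
            for p in range(1, pages + 1)]

# Pull the detail-page links out of one listing page. The
# "#page_list > ul > li > a" selector is an assumption based on the
# listing-page markup referenced in the snippet further down.
def get_detail_links(listing_url, headers):
    soup = BeautifulSoup(requests.get(listing_url, headers=headers).text, "lxml")
    return [a.get("href") for a in soup.select("#page_list > ul > li > a")]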

Because the site applies IP-based anti-scraping, the server responds with a redirect to the wrong page here, so the subsequent steps fail.
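One way to confirm the redirect is to inspect the response history that requests records; this small check is not part of the original code.

import requests

def was_redirected(url, headers):
    # requests follows redirects by default and keeps the
    # intermediate responses in resp.history.
    resp = requests.get(url, headers=headers)
    if resp.history:
        print("redirected via", [r.status_code for r in resp.history], "to", resp.url)
    return bool(resp.history)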

# Anti-scraping kicked in here: the response was redirected,
# so it is not the page we actually wanted to crawl.
from bs4 import BeautifulSoup
import requests
import time  # import the required libraries (time is unused in this snippet)

# Note: the scheme and host were lost when this article was archived;
# only the detail-page path remains.
url = "/fangzi/1047842478.html"

# Attach the request headers.
headers = {
    "Cookie": "abtest_ABTest4SearchDate=b; sajssdk__cross_new_user=1; distinctId=17663eb00672c9-0d67d3dfd2265d-e726559-2073600-17663eb006841a; Hm_lvt_92e8bc890f374994dd570aa15afc99e1=1607994115,1608023687; xzuuid=87961465; xzuinfo=%7B%22user_id%22%3A153018699197%2C%22user_name%22%3A%2217317126846%22%2C%22user_key%22%3A%223d865d010085%22%2C%22user_nickName%22%3A%22wangwangluo123%22%7D; xzucode=1e98f258b6137a484cf910d72d023371; xzucode4im=ac7725f797e9e2a2b0ad8cdbe1351291; xztoken=WyIwMTA1MTIyNjE1V0xoRCIseyJ1c2VyaWQiOjE1MzAxODY5OTE5NywiZXhwaXJlIjowLCJjIjoid2ViIn0sImZmMTk3MWQ0MDg4ZWNiYjA1MTU1Nzc1ZGQ3YWYzY2RhIl0%3D; xzSessId4H5=b5a5b64d28b22fc6567fdbe586a5770c; _pykey_=ed9c883e-5526-519d-801c-4be4c37724ca; sensorsdatajssdkcross=%7B%22distinct_id%22%3A%22153018699197%22%2C%22first_id%22%3A%2217663eb00672c9-0d67d3dfd2265d-e726559-2073600-17663eb006841a%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E8%87%AA%E7%84%B6%E6%90%9C%E7%B4%A2%E6%B5%81%E9%87%8F%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC%22%2C%22%24latest_referrer%22%3A%22https%3A%2F%%2Flink%22%7D%2C%22%24device_id%22%3A%2217663eb00511d9-0a4d3fd6b7de7e-e726559-2073600-17663eb005298%22%7D; rule_math=tckf4hwakbq; Hm_lpvt_92e8bc890f374994dd570aa15afc99e1=1608024368",
    "Referer": "/fangzi/1047842478.html",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36 SE 2.X MetaSr 1.0",
    "accept": "*/*",
    "accept-encoding": "gzip, deflate, br",
    "accept-language": "zh-CN,zh;q=0.9",
    "access-control-request-headers": "content-type",
    "access-control-request-method": "GET",
    "sec-fetch-dest": "empty",
    "sec-fetch-mode": "cors",
    "sec-fetch-site": "same-site",
}

def get_info(url):
    wb_data = requests.get(url, headers=headers)
    print(wb_data.text)
    soup = BeautifulSoup(wb_data.text, "lxml")
    titles = soup.select("#page_list > ul > li:nth-of-type(1) > div.result_btm_con.lodgeunitname > div:nth-child(1) > span > i")
    print(titles)

get_info(url)
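Since the block is IP-based, a common next step is to throttle requests and route them through a proxy. The snippet below is a minimal sketch of that idea; the proxy address is a placeholder, not a working endpoint.

import time
import requests

PROXIES = {"http": "http://127.0.0.1:8888",   # placeholder proxy;
           "https": "http://127.0.0.1:8888"}  # substitute a real pool entry

def polite_get(url, headers, delay=2):
    time.sleep(delay)  # pause between requests to reduce the ban risk
    return requests.get(url, headers=headers, proxies=PROXIES, timeout=10)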
