Baidu Ranking Checker for Your Website: A Python Baidu Keyword Ranking Query Tool

To evaluate SEO performance, traffic and rankings are both essential. Traffic can be tracked with Baidu Tongji (Baidu Analytics) or other analytics tools. For checking and monitoring your site's rankings, besides third-party SEO tools such as Chinaz (站长工具), Aizhan, and 5118, you have another option: write your own Baidu ranking query tool in Python.


Knowing where your site ranks is actually straightforward: search the keyword on Baidu yourself and look for your site in the results. A script simply automates the same process; the principle is identical.
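Before the full versions, here is the whole idea in a few lines - a rough sketch only, where the keyword, domain, and minimal User-Agent are placeholders, and the check simply looks for the domain string anywhere in the returned page:

import requests

keyword, domain = 'python tutorial', 'example.com'  # placeholders - use your own
headers = {'User-Agent': 'Mozilla/5.0'}             # the real versions below use a full header set

for page in range(3):  # check the first 3 result pages
    html = requests.get(f'https://www.baidu.com/s?wd={keyword}&ie=UTF-8&pn={page * 10}',
                        headers=headers, timeout=10).text
    if domain in html:
        print(f'{domain} appears somewhere on results page {page + 1}')
        break
else:
    print(f'{domain} was not found in the first 3 pages')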

Now let's build the real thing!

I've actually written quite a few versions of this. The single most important part is getting the request headers right; out of laziness (honestly, I couldn't be bothered to fine-tune them), I simply copied a working set of headers over wholesale.


Version 1: Scraping Baidu search results

This version was written for a buddy. It relies on headache-inducing re regular expressions, so it has plenty of bugs, and sponsored (paid) results are not filtered out. Use it (or just read it) as a rough reference!

A few key points:

1. The request headers, which I've covered before:

headers3 = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'zh-CN,zh;q=0.9',
    'Cache-Control': 'max-age=0',
    'Connection': 'keep-alive',
    'Cookie': 'PSTM=1558160815; BIDUPSID=EDB23C4462B823EBF68459121BA2015A; sug=0; ORIGIN=0; bdime=0; BAIDUID=9DF4963AB10D49C918954437F25DE026:SL=0:NR=10:FG=1; sugstore=1; BDORZ=B490B5EBF6F3CD402E515D22BCDA1598; BD_UPN=12314753; delPer=0; BD_HOME=0; H_PS_PSSID=1422_21118_30210_30327_30283_26350_22159; BD_CK_SAM=1; PSINO=6; H_PS_645EC=649eN9CrXiFNm04eUied5%2FE5lhjZRYGKt0vCTPMT1R1rXmUvvEMhLXuYOZc; COOKIE_SESSION=809_0_7_9_0_4_0_0_7_4_1_0_0_0_0_0_0_0_1577954827%7C9%2385006_54_1574240073%7C9; ispeed_lsm=2; BDSVRTM=563; WWW_ST=1577954856013',
    'Host': 'www.baidu.com',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 10_3 like Mac OS X) AppleWebKit/602.1.50 (KHTML, like Gecko) CriOS/56.0.2924.75 Mobile/14E5239e Safari/602.1',
}

2. Getting the real URL behind Baidu's redirect links

The real URL is read from the response's headers['Location']:

def get_trueurl(url):
    try:
        r = requests.head(url, stream=True)
        zsurl = r.headers['Location']
    except:
        zsurl = None
    return zsurl
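To sanity-check the function on its own, you can pass it a redirect link copied from a Baidu results page; the address below is a made-up placeholder, not a real redirect:

redirect = 'https://www.baidu.com/link?url=xxxxxxxx'  # hypothetical redirect link from a results page
print(get_trueurl(redirect))  # prints the real landing-page URL, or None if resolution fails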

3. Writing and saving the data

CSV version:

# Write one row to CSV (keyword comes from the main program's scope)
def write_csv(data):
    with open('{}_csv_search_results.csv'.format(keyword), 'a+') as f:
        f.write('%s%s' % (data, '\n'))

Excel version:

# Write the collected data into an Excel workbook
def write_to_xlsx(keyword, data_lists):
    workbook = xlsxwriter.Workbook('{}_excel_search_results.xlsx'.format(keyword))  # create the Excel file
    worksheet = workbook.add_worksheet(keyword)
    title = ['Page title', 'Page URL']  # header row
    worksheet.write_row('A1', title)
    for index, data in enumerate(data_lists):
        num0 = str(index + 2)  # data rows start at row 2, below the header
        row = 'A' + num0
        worksheet.write_row(row, data)
    workbook.close()
    print("Search results written to Excel!")
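For reference, a standalone call could look like this - the keyword and the two result rows are made-up sample data, and it assumes xlsxwriter is installed and the function above is defined:

sample_rows = [
    ('Example result title', 'https://www.example.com/'),  # hypothetical data
    ('Another result title', 'https://www.example.org/'),
]
write_to_xlsx('python', sample_rows)  # creates python_excel_search_results.xlsx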

Full source code attached:

# -*- coding: UTF-8 -*-
# Baidu search result scraper
# 20200102 by WeChat: huguo002

import requests
import re, time
import xlsxwriter

headers3 = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'zh-CN,zh;q=0.9',
    'Cache-Control': 'max-age=0',
    'Connection': 'keep-alive',
    'Cookie': 'PSTM=1558160815; BIDUPSID=EDB23C4462B823EBF68459121BA2015A; sug=0; ORIGIN=0; bdime=0; BAIDUID=9DF4963AB10D49C918954437F25DE026:SL=0:NR=10:FG=1; sugstore=1; BDORZ=B490B5EBF6F3CD402E515D22BCDA1598; BD_UPN=12314753; delPer=0; BD_HOME=0; H_PS_PSSID=1422_21118_30210_30327_30283_26350_22159; BD_CK_SAM=1; PSINO=6; H_PS_645EC=649eN9CrXiFNm04eUied5%2FE5lhjZRYGKt0vCTPMT1R1rXmUvvEMhLXuYOZc; COOKIE_SESSION=809_0_7_9_0_4_0_0_7_4_1_0_0_0_0_0_0_0_1577954827%7C9%2385006_54_1574240073%7C9; ispeed_lsm=2; BDSVRTM=563; WWW_ST=1577954856013',
    'Host': 'www.baidu.com',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 10_3 like Mac OS X) AppleWebKit/602.1.50 (KHTML, like Gecko) CriOS/56.0.2924.75 Mobile/14E5239e Safari/602.1',
}
# Resolve a Baidu redirect link to the real URL
def get_trueurl(url):
    try:
        r = requests.head(url, stream=True)
        zsurl = r.headers['Location']
    except:
        zsurl = None
    return zsurl
# Write one row to CSV (keyword comes from the main program's scope)
def write_csv(data):
    with open('{}_csv_search_results.csv'.format(keyword), 'a+') as f:
        f.write('%s%s' % (data, '\n'))
# Write the collected data into an Excel workbook
def write_to_xlsx(keyword, data_lists):
    workbook = xlsxwriter.Workbook('{}_excel_search_results.xlsx'.format(keyword))  # create the Excel file
    worksheet = workbook.add_worksheet(keyword)
    title = ['Page title', 'Page URL']  # header row
    worksheet.write_row('A1', title)
    for index, data in enumerate(data_lists):
        num0 = str(index + 2)  # data rows start at row 2, below the header
        row = 'A' + num0
        worksheet.write_row(row, data)
    workbook.close()
    print("Search results written to Excel!")
def get_search(keyword, num):
    data_lists = []
    for i in range(0, num):
        print(f">>> Fetching page {i+1} of search results...")
        page = i * 10
        #keyword="微信搜索"
        url = "https://www.baidu.com/s?wd=%s&ie=UTF-8&pn=%d" % (keyword, page)
        response = requests.get(url, headers=headers3).content.decode('utf-8')
        time.sleep(1)
        content_left = re.findall(r'<div id="content_left">(.+?)<div id="rs">', response, re.S)[0]
        h3s = re.findall(r'<h3 class=".+?">(.+?)</h3>', content_left, re.S)
        print(len(h3s))
        for h3 in h3s:
            if "WleiRf" not in h3 and "stdWZk" not in h3:
                #print(h3)
                title = re.findall(r'<a.+?>(.+?)</a>', h3, re.S)[0]
                title = title.replace('<em>', '').replace('</em>', '')
                titlecsv = title.replace(',', '-')  # avoid breaking the CSV with commas
                try:
                    href = re.findall(r'href = "(.+?)"', h3, re.S)[0]
                except:
                    href = re.findall(r'href="(.+?)"', h3, re.S)[0]
                site_url = get_trueurl(href)
                print(title, site_url)
                data_list = (title, site_url)
                data = '%s%s%s' % (titlecsv, ',', site_url)
                write_csv(data)
                data_lists.append(data_list)
            time.sleep(6)

    write_to_xlsx(keyword, data_lists)

if __name__ == "__main__":
    while True:
        keyword = input('Enter the keyword to query: ')
        num = input('Enter the number of pages to query: ')
        num = int(num)
        try:
            get_search(keyword, num)
            print("\n")
            print(f">>> Baidu search results for {keyword} retrieved and saved!")
            print(">>> Enter another query to search again, or close the program.")
            print("\n")
        except:
            print("Something went wrong - please close the program and try again!")

Running it looks like this:

[Screenshots of the program running]

A packaged .exe build is attached:

Link: https://pan.baidu.com/s/1ApAPQe-U-R-uQ4gdRCqWOg

Extraction code: f8ms

That said, it is genuinely slow!!!

Sponsored results still slip through as garbled entries!!!

And this version doesn't check for a target URL!!

Version 2: Baidu keyword ranking lookup for a target site

This one uses bs4 (BeautifulSoup) for parsing.

Everything here has been covered before, so here's the source; read through it at your leisure if you're interested:

# -*- coding: UTF-8 -*-
# Baidu keyword ranking checker
# 20191121 by WeChat: huguo00289

import requests, time
from bs4 import BeautifulSoup
headers3 = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'zh-CN,zh;q=0.9',
    'Cache-Control': 'max-age=0',
    'Connection': 'keep-alive',
    'Cookie': 'PSTM=1558160815; BIDUPSID=EDB23C4462B823EBF68459121BA2015A; sug=0; ORIGIN=0; bdime=0; BAIDUID=9DF4963AB10D49C918954437F25DE026:SL=0:NR=10:FG=1; sugstore=1; BDORZ=B490B5EBF6F3CD402E515D22BCDA1598; BD_UPN=12314753; delPer=0; BD_HOME=0; H_PS_PSSID=1422_21118_30210_30327_30283_26350_22159; BD_CK_SAM=1; PSINO=6; H_PS_645EC=649eN9CrXiFNm04eUied5%2FE5lhjZRYGKt0vCTPMT1R1rXmUvvEMhLXuYOZc; COOKIE_SESSION=809_0_7_9_0_4_0_0_7_4_1_0_0_0_0_0_0_0_1577954827%7C9%2385006_54_1574240073%7C9; ispeed_lsm=2; BDSVRTM=563; WWW_ST=1577954856013',
    'Host': 'www.baidu.com',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 10_3 like Mac OS X) AppleWebKit/602.1.50 (KHTML, like Gecko) CriOS/56.0.2924.75 Mobile/14E5239e Safari/602.1',
}

# Resolve a Baidu redirect link to the real URL
def get_trueurl(url):
    r = requests.head(url, stream=True)
    zsurl = r.headers['Location']
    return zsurl
# Fetch a search results page
def get_response(url):
    response = requests.get(url, headers=headers3, timeout=10)
    print(f'Status code: {response.status_code}')
    time.sleep(2)
    response.encoding = 'utf-8'
    req = response.text
    return req
# Check whether the target URL appears in this result
def cxwz(keyword, i, pm, title, zsurl, cxurl, href):
    if cxurl in zsurl:
        cxjg = f'Keyword: {keyword}, page: {i + 1}, rank: {pm}, title: {title}, URL: {zsurl}, Baidu link: {href}'
        print(cxjg)
    else:
        cxjg = []
    return cxjg

# Query the rankings
def get_bdpm(keyword, num, cxurl):
    jg = []
    """
    # URL-encode the keyword
    key_word = urllib.parse.quote(keyword)
    print(key_word)
    """
    for i in range(0, int(num)):
        print(f'Checking rankings on page {i + 1}...')
        ym = i * 10
        url = f"https://www.baidu.com/s?wd={keyword}&ie=UTF-8&pn={ym}"
        print(url)
        req = get_response(url)
        #print(req)
        soup = BeautifulSoup(req, 'lxml')
        divs = soup.find('div', id="content_left").find_all('div')
        for div in divs:
            if 'class="result' in str(div):
                pm = div['id']  # the div id is the result's position on the page
                title = div.find('a').get_text()
                href = div.find('a')['href']
                zsurl = get_trueurl(href)
                print(pm, title, zsurl)
                cxjg = cxwz(keyword, i, pm, title, zsurl, cxurl, href)
                if cxjg != []:
                    jg.append(cxjg)
        time.sleep(5)

    print("Ranking query results:")
    print("-----------------------------------------")
    if jg == []:
        print("No ranking found for this keyword!")
    else:
        print('\n'.join(jg))
    print("-----------------------------------------")
    print("Ranking query finished")
if __name__ == '__main__':
    while True:
        keyword = input('Enter the keyword to query: ')
        num = input('Enter the number of pages to query: ')
        url = input('Enter the target domain to look for: ')
        try:
            get_bdpm(keyword, num, url)
        except IndexError as e:
            print(e)
            print("Ranking query failed!")

Compared with the first version, this one drops the data export but adds matching against a target URL.

And it reliably pulls out the search result data we actually want!

Running it looks like this:

[Screenshots of the program running]

Version 3: Querying the Baidu news search JSON interface

Fewer restrictions, but keep in mind that news rankings are not the same as web search rankings!

The JSON interface:

https://www.baidu.com/s?wd=<keyword>&pn=50&rn=50&tn=json

wd: the keyword
pn: result offset
rn: number of results per page (default 10, maximum 50)

Full source code:

# JSON-interface Baidu news search query
import requests, json  # headers3 is the same headers dict used in the versions above

def jpmcx(keyword):
    """
    wd: the keyword
    pn: result offset
    rn: number of results per page (default 10, maximum 50)
    """
    url = "https://www.baidu.com/s?wd=%s&pn=50&rn=50&tn=json" % keyword
    response = requests.get(url, headers=headers3, timeout=10).content.decode('utf-8')
    req = json.loads(response)
    datas = req['feed']['entry']
    for data in datas:
        if data != {}:
            #print(data)
            pm = data['pn']            # rank
            title = data['title']      # page title
            url = data['url']          # page URL
            description = data['abs']  # description
            print(pm, title, url, description)
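A minimal way to try it, assuming the headers3 dict from the earlier versions is defined in the same file (the keyword below is just an example), might be:

if __name__ == '__main__':
    jpmcx('python')  # prints rank, title, URL, and description for each non-empty JSON entry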

One final reminder:

If the script's results differ from what you see when you search manually in a browser, compare the request headers you are sending with the browser's - in particular, whether you are logged in to a Baidu account and whether Baidu account cookies are present in the Cookie header!
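A quick way to check the effect of those cookies (a minimal sketch, assuming the headers3 dict defined above is in scope; the keyword is just an example) is to send the same query with and without the Cookie entry and compare what comes back:

import requests

keyword = 'python'  # example keyword
url = f'https://www.baidu.com/s?wd={keyword}&ie=UTF-8&pn=0'

# The same headers, minus the Baidu account cookies
headers_no_cookie = {k: v for k, v in headers3.items() if k != 'Cookie'}

with_cookie = requests.get(url, headers=headers3, timeout=10).text
without_cookie = requests.get(url, headers=headers_no_cookie, timeout=10).text
print(len(with_cookie), len(without_cookie))  # noticeably different sizes suggest the results differ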

Of course, if you query too fast and too often, whether Baidu ignores you or outright blocks you is something you can find out for yourself - I'll sit that experiment out!!
