Background

WeChat official accounts are arguably one of the harder platforms to crawl, but after some tinkering I got decent results. I did not use Scrapy (crawling too fast would likely trip anti-scraping limits anyway), though I plan to write up more hands-on walkthroughs later. A quick look at the development environment for this one:

  • python3
  • requests
  • psycopg2 (for working with PostgreSQL)

Packet Capture Analysis

There is no restriction on which official account you target, but each account has to be analyzed afresh before every crawl. Open Fiddler and point the phone's proxy at it. To cut down the noise, add a filter rule in Fiddler so that only traffic for the WeChat domain mp.weixin.qq.com is shown:

[Figure: Fiddler Filter rule configuration]

I follow quite a few official accounts; this walkthrough takes the "36氪" (36Kr) account as its example. Read on:

[Figure: the "36氪" official account]

[Figure: account profile, top-right menu -> 全部消息 (All Messages)]

On the account's profile page, tap the three solid dots in the top-right corner to enter the message view, scroll down to and tap 全部消息 (All Messages), then pull a few more pages of article history. Back in Fiddler you should now see those requests. The responses are JSON, and the article data itself is a JSON string embedded in the general_msg_list field:

[Figure: captured requests for the article list API]

Analyzing the Article List API

Paste the request URL and Cookie from the capture for analysis. The session-specific values change with every capture, so they appear as placeholders below; the endpoint and parameter layout look like this:

https://mp.weixin.qq.com/mp/profile_ext?action=getmsg&__biz=<biz>&f=json&offset=<offset>&count=10&is_ok=1&scene=124&uin=777&key=777&pass_ticket=<pass_ticket>&wxtoken=&appmsg_token=<appmsg_token>&x5=1&f=json

Cookie: wap_sid2=<wap_sid2>

The parameters that matter are __biz (identifies the official account), offset and count (paging), and the two credentials pass_ticket and appmsg_token; together with the wap_sid2 Cookie, they all come straight from the capture. The interface returns JSON data like this:
{
"ret": 0,
"errmsg": "ok",
"msg_count": 10,
"can_msg_continue": 1,
"general_msg_list": "{\"list\":[{\"comm_msg_info\":{\"id\":1000005700,\"type\":49,\"datetime\":1535100943,\"fakeid\":\"3264997043\",\"status\":2,\"content\":\"\"},\"app_msg_ext_info\":{\"title\":\"金融危机又十年:钱荒之下,二手基金迎来高光时刻\",\"digest\":\"退出永远是基金的主旋律。\",\"content\":\"\",\"fileid\":100034824,\"content_url\":\"http:\\/\\/mp.weixin.qq.com\\/s\",\"source_url\":\"\",\"cover\":\"http:\\/\\/mmbiz.qpic.cn\\/mmbiz_jpg\\/QicyPhNHD5vYgdpprkibtnWCAN7l4ZaqibKvopNyCWWLQAwX7QpzWicnQSVfcBZmPrR5YuHS45JIUzVjb0dZTiaLPyA\\/0\",\"subtype\":9,\"is_multi\":0,\"multi_app_msg_item_list\":[],\"author\":\"石亚琼\",\"copyright_stat\":11,\"duration\":0,\"del_flag\":1,\"item_show_type\":0,\"audio_fileid\":0,\"play_url\":\"\",\"malicious_title_reason_id\":0,\"malicious_content_type\":0}}]}",
"next_offset": 20,
"video_count": 1,
"use_video_tab": 1,
"real_type": 0
}
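
Note that general_msg_list is itself a JSON string embedded in the JSON response, so it has to be decoded a second time, while can_msg_continue and next_offset drive the paging. A minimal sketch of unwrapping page after page (fetch_page is a hypothetical helper that performs the GET shown above and returns the decoded top-level JSON as a dict):

import json


def iter_articles(fetch_page):
    """Yield (comm_msg_info, app_msg_ext_info) pairs across pages.

    fetch_page(offset) is a hypothetical helper performing the GET shown
    above and returning the top-level JSON as a dict.
    """
    offset = 0
    while True:
        resp = fetch_page(offset)
        if resp.get('ret') != 0:
            break
        # second decode: general_msg_list is a JSON string, not an object
        for msg in json.loads(resp['general_msg_list'])['list']:
            info = msg.get('app_msg_ext_info')
            if info:  # some entries are plain messages without an article
                yield msg['comm_msg_info'], info
        if not resp.get('can_msg_continue'):
            break
        offset = resp['next_offset']  # server-provided resume point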

From this we can pull out the data we want. The article table is structured as follows, with the CREATE TABLE SQL attached:

[Figure: the article data table]

-- ----------------------------
-- Table structure for tb_article
-- ----------------------------
DROP TABLE IF EXISTS "public"."tb_article";
CREATE TABLE "public"."tb_article" (
"id" serial4 PRIMARY KEY,
"msg_id" int8 NOT NULL,
"title" varchar(200) COLLATE "pg_catalog"."default" NOT NULL,
"author" varchar(20) COLLATE "pg_catalog"."default",
"cover" varchar(500) COLLATE "pg_catalog"."default",
"digest" varchar(200) COLLATE "pg_catalog"."default",
"source_url" varchar(800) COLLATE "pg_catalog"."default",
"content_url" varchar(600) COLLATE "pg_catalog"."default" NOT NULL,
"post_time" timestamp(6),
"create_time" timestamp(6) NOT NULL
)
;
COMMENT ON COLUMN "public"."tb_article"."id" IS 'auto-increment primary key';
COMMENT ON COLUMN "public"."tb_article"."msg_id" IS 'message id (unique)';
COMMENT ON COLUMN "public"."tb_article"."title" IS 'title';
COMMENT ON COLUMN "public"."tb_article"."author" IS 'author';
COMMENT ON COLUMN "public"."tb_article"."cover" IS 'cover image URL';
COMMENT ON COLUMN "public"."tb_article"."digest" IS 'digest (summary)';
COMMENT ON COLUMN "public"."tb_article"."source_url" IS 'original source URL';
COMMENT ON COLUMN "public"."tb_article"."content_url" IS 'WeChat article URL';
COMMENT ON COLUMN "public"."tb_article"."post_time" IS 'publish time';
COMMENT ON COLUMN "public"."tb_article"."create_time" IS 'row creation time';
COMMENT ON TABLE "public"."tb_article" IS 'official account articles';
-- ----------------------------
-- Indexes structure for table tb_article
-- ----------------------------
CREATE UNIQUE INDEX "unique_msg_id" ON "public"."tb_article" USING btree (
"msg_id" "pg_catalog"."int8_ops" ASC NULLS LAST
);
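
One practical note: because of the unique index on msg_id, re-crawling the same history raises duplicate-key errors. A variant of the insert statement built in _save_article below (my own suggestion, not from the original code) lets PostgreSQL silently skip rows it already has:

# suggested replacement for the SQL string built in _save_article below:
# ON CONFLICT makes re-runs skip messages whose msg_id is already stored
SAVE_ARTICLE_SQL = (
    'insert into tb_article(msg_id,title,author,cover,digest,'
    'source_url,content_url,post_time,create_time) '
    'values(%s,%s,%s,%s,%s,%s,%s,%s,%s) '
    'on conflict (msg_id) do nothing'
)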

Below is the code that requests the article API, parses the response, and saves the data to the database:

import json
import time
from datetime import datetime

import requests

from utils import pgs


class WxMps(object):
    """WeChat official account article & comment crawler"""

    def __init__(self, _biz, _pass_ticket, _app_msg_token, _cookie, _offset=0):
        self.offset = _offset
        self.biz = _biz  # official account id
        self.msg_token = _app_msg_token  # credential (changes per session)
        self.pass_ticket = _pass_ticket  # credential (changes per session)
        self.headers = {
            'Cookie': _cookie,  # Cookie (changes per session)
            'User-Agent': 'Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/57.0.2987.132 '
        }
        wx_mps = 'wxmps'  # db name, user and password happen to be identical (replace with your own)
        self.postgres = pgs.Pgs(host='localhost', port='5432', db_name=wx_mps, user=wx_mps, password=wx_mps)

    def start(self):
        """Request the article list API of the official account"""

        offset = self.offset
        while True:
            api = 'https://mp.weixin.qq.com/mp/profile_ext?action=getmsg&__biz={0}&f=json&offset={1}' \
                  '&count=10&is_ok=1&scene=124&uin=777&key=777&pass_ticket={2}&wxtoken=&appmsg_token' \
                  '={3}&x5=1&f=json'.format(self.biz, offset, self.pass_ticket, self.msg_token)
            resp = requests.get(api, headers=self.headers).json()
            ret, status = resp.get('ret'), resp.get('errmsg')  # 0 / 'ok' on success
            if ret == 0 or status == 'ok':
                print('Crawl article: ' + api)
                offset = resp['next_offset']  # resume point for the next page
                general_msg_list = resp['general_msg_list']
                msg_list = json.loads(general_msg_list)['list']  # nested JSON string, decode again
                for msg in msg_list:
                    comm_msg_info = msg['comm_msg_info']  # message meta info
                    msg_id = comm_msg_info['id']  # message id
                    post_time = datetime.fromtimestamp(comm_msg_info['datetime'])  # publish time
                    app_msg_ext_info = msg.get('app_msg_ext_info')  # article content info
                    if app_msg_ext_info:
                        self._parse_articles(app_msg_ext_info, msg_id, post_time)
                        # one push may bundle several articles
                        for info in app_msg_ext_info.get('multi_app_msg_item_list', []):
                            self._parse_articles(info, msg_id, post_time)
                time.sleep(3)  # pause between pages to avoid being throttled
            else:
                break

    def _parse_articles(self, info, msg_id, post_time):
        """Parse the nested article data and save it to the database"""

        title = info.get('title')  # title
        cover = info.get('cover')  # cover image
        author = info.get('author')  # author
        digest = info.get('digest')  # digest/summary
        source_url = info.get('source_url')  # original source URL
        content_url = info.get('content_url')  # WeChat article URL
        # ext_data = json.dumps(info, ensure_ascii=False)  # raw data
        self.postgres.handler(self._save_article(), (msg_id, title, author, cover, digest,
                                                     source_url, content_url, post_time,
                                                     datetime.now()), fetch=True)

    @staticmethod
    def _save_article():
        sql = 'insert into tb_article(msg_id,title,author,cover,digest,source_url,content_url,post_time,create_time) ' \
              'values(%s,%s,%s,%s,%s,%s,%s,%s,%s)'
        return sql


if __name__ == '__main__':
    biz = 'MzI2NDk5NzA0Mw=='  # "36氪"
    pass_ticket = 'NDndxxaZ7p6Z9PYulWpLqMbI0i3ULFeCPIHBFu1sf5pX2IhkGfyxZ6b9JieSYRUy'
    app_msg_token = '971_Z0lVNQBcGsWColSubRO9H13ZjrPhjuljyxLtiQ~~'
    cookie = 'wap_sid2=CO3YwOQHEogBQnN4VTNhNmxQWmc3UHI2U3kteWhUeVExZHFVMnN0QXlsbzVJRUJKc1pkdVFUU2Y5UzhSVEtOZmt1VVlYTkR4SEllQ2huejlTTThJWndMQzZfYUw2SldLVGVMQUthUjc3QWdVMUdoaGN0Nml2SU05cXR1dTN2RkhRUVd1V2Y3SFJ5d01BQUF+fjCB1pLcBTgNQJVO'
    # all of the above change per account and per capture session; refresh them from Fiddler
    wxMps = WxMps(biz, pass_ticket, app_msg_token, cookie)
    wxMps.start()  # start crawling articles
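
The code above imports a small utils.pgs helper that the article does not show. A minimal sketch of a compatible Pgs wrapper built on psycopg2 (the constructor and handler signatures are inferred from the calls above; for fetch=True to return the new row's id, the insert would also need a RETURNING id clause):

import psycopg2


class Pgs:
    """Minimal stand-in for utils.pgs.Pgs, inferred from how it is called."""

    def __init__(self, host, port, db_name, user, password):
        self.conn = psycopg2.connect(host=host, port=port, dbname=db_name,
                                     user=user, password=password)

    def handler(self, sql, params, fetch=False):
        """Run sql with params; optionally return the first column of the first row."""
        with self.conn.cursor() as cur:
            try:
                cur.execute(sql, params)
                self.conn.commit()
            except psycopg2.Error as err:
                self.conn.rollback()  # e.g. a duplicate msg_id hitting the unique index
                print('db error:', err)
                return None
            if fetch and cur.description:  # description is set only if rows came back
                row = cur.fetchone()
                return row[0] if row else None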

Analyzing the Article Comment API

Fetching comments follows roughly the same idea, just with a bit more hassle. First open an article that has comments on the phone, then look at the request Fiddler captured:

[Figure: article comments on an official account post]

[Figure: captured request to the comment API]

Extract the URL and Cookie from this capture and analyze them once more. With the session values again shown as placeholders, the comment request looks like this:

https://mp.weixin.qq.com/mp/appmsg_comment?action=getcomment&scene=0&__biz=<biz>&appmsgid=<appmsgid>&idx=1&comment_id=<comment_id>&offset=0&limit=100&uin=777&key=777&pass_ticket=<pass_ticket>&wxtoken=777&appmsg_token=<appmsg_token>&x5=1&f=json

Besides the credentials we already have, this API needs three more values: appmsgid, comment_id and a page-level appmsg_token. All three are embedded in the article page's HTML, so they can be pulled out with regular expressions:

def _parse_article_detail(self, content_url, article_id):
    """Extract from the article page the parameters needed for fetching comments;
    article_id is the id of the article already saved"""
    try:
        # normalize the URL taken from the list API before requesting it
        api = content_url.replace('amp;', '').replace('#wechat_redirect', '').replace('http', 'https')
        html = requests.get(api, headers=self.headers).text
    except Exception:
        print('Failed to fetch the article page: ' + content_url)
    else:
        # group(0) would be the whole matched line; group(1) holds the value we want
        str_comment = re.search(r'var comment_id = "(.*)" \|\| "(.*)" \* 1;', html)
        str_msg = re.search(r"var appmsgid = '' \|\| '(.*)'\|\|", html)
        str_token = re.search(r'window.appmsg_token = "(.*)";', html)
        if str_comment and str_msg and str_token:
            comment_id = str_comment.group(1)  # comment id (fixed per article)
            app_msg_id = str_msg.group(1)  # message id (changes per session)
            appmsg_token = str_token.group(1)  # page-level token (changes per session)

Now come back to the JSON this API returns. After working out its structure, define the comment data table (SQL included below):
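
For reference, the shape of that response can be reconstructed from the fields the final code reads; treat this skeleton as an assumption based on that code rather than a verbatim capture:

# illustrative skeleton only; values elided
resp = {
    'base_resp': {'ret': 0, 'errmsg': 'ok'},
    'elected_comment': [            # the comments shown under the article
        {
            'content_id': '...',    # unique id of this comment
            'nick_name': '...',     # commenter nickname
            'logo_url': '...',      # avatar URL
            'content': '...',       # comment text
            'like_num': 0,          # number of likes
            'create_time': 1535100943,  # unix timestamp of the comment
        },
    ],
}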

[Figure: the article comment data table]

-- ----------------------------
-- Table structure for tb_article_comment
-- ----------------------------
DROP TABLE IF EXISTS "public"."tb_article_comment";
CREATE TABLE "public"."tb_article_comment" (
"id" serial4 PRIMARY KEY,
"article_id" int4 NOT NULL,
"comment_id" varchar(50) COLLATE "pg_catalog"."default",
"nick_name" varchar(50) COLLATE "pg_catalog"."default" NOT NULL,
"logo_url" varchar(300) COLLATE "pg_catalog"."default",
"content_id" varchar(50) COLLATE "pg_catalog"."default" NOT NULL,
"content" varchar(3000) COLLATE "pg_catalog"."default" NOT NULL,
"like_num" int2,
"comment_time" timestamp(6),
"create_time" timestamp(6) NOT NULL
)
;
COMMENT ON COLUMN "public"."tb_article_comment"."id" IS 'auto-increment primary key';
COMMENT ON COLUMN "public"."tb_article_comment"."article_id" IS 'article foreign key id';
COMMENT ON COLUMN "public"."tb_article_comment"."comment_id" IS 'comment API id';
COMMENT ON COLUMN "public"."tb_article_comment"."nick_name" IS 'user nickname';
COMMENT ON COLUMN "public"."tb_article_comment"."logo_url" IS 'avatar URL';
COMMENT ON COLUMN "public"."tb_article_comment"."content_id" IS 'comment content id (unique)';
COMMENT ON COLUMN "public"."tb_article_comment"."content" IS 'comment text';
COMMENT ON COLUMN "public"."tb_article_comment"."like_num" IS 'like count';
COMMENT ON COLUMN "public"."tb_article_comment"."comment_time" IS 'comment time';
COMMENT ON COLUMN "public"."tb_article_comment"."create_time" IS 'row creation time';
COMMENT ON TABLE "public"."tb_article_comment" IS 'official account article comments';
-- ----------------------------
-- Indexes structure for table tb_article_comment
-- ----------------------------
CREATE UNIQUE INDEX "unique_content_id" ON "public"."tb_article_comment" USING btree (
"content_id" COLLATE "pg_catalog"."default" "pg_catalog"."text_ops" ASC NULLS LAST
);

The long march is almost over. Here is the final piece of code; since fetching comments needs the article URL first, it is combined with the article-crawling code above:

import json
import re
import time
from datetime import datetime

import requests

from utils import pgs


class WxMps(object):
    """WeChat official account article & comment crawler"""

    def __init__(self, _biz, _pass_ticket, _app_msg_token, _cookie, _offset=0):
        self.offset = _offset
        self.biz = _biz  # official account id
        self.msg_token = _app_msg_token  # credential (changes per session)
        self.pass_ticket = _pass_ticket  # credential (changes per session)
        self.headers = {
            'Cookie': _cookie,  # Cookie (changes per session)
            'User-Agent': 'Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/57.0.2987.132 '
        }
        wx_mps = 'wxmps'  # db name, user and password happen to be identical (replace with your own)
        self.postgres = pgs.Pgs(host='localhost', port='5432', db_name=wx_mps, user=wx_mps, password=wx_mps)

    def start(self):
        """Request the article list API of the official account"""

        offset = self.offset
        while True:
            api = 'https://mp.weixin.qq.com/mp/profile_ext?action=getmsg&__biz={0}&f=json&offset={1}' \
                  '&count=10&is_ok=1&scene=124&uin=777&key=777&pass_ticket={2}&wxtoken=&appmsg_token' \
                  '={3}&x5=1&f=json'.format(self.biz, offset, self.pass_ticket, self.msg_token)
            resp = requests.get(api, headers=self.headers).json()
            ret, status = resp.get('ret'), resp.get('errmsg')  # 0 / 'ok' on success
            if ret == 0 or status == 'ok':
                print('Crawl article: ' + api)
                offset = resp['next_offset']  # resume point for the next page
                general_msg_list = resp['general_msg_list']
                msg_list = json.loads(general_msg_list)['list']  # nested JSON string, decode again
                for msg in msg_list:
                    comm_msg_info = msg['comm_msg_info']  # message meta info
                    msg_id = comm_msg_info['id']  # message id
                    post_time = datetime.fromtimestamp(comm_msg_info['datetime'])  # publish time
                    app_msg_ext_info = msg.get('app_msg_ext_info')  # article content info
                    if app_msg_ext_info:
                        self._parse_articles(app_msg_ext_info, msg_id, post_time)
                        # one push may bundle several articles
                        for info in app_msg_ext_info.get('multi_app_msg_item_list', []):
                            self._parse_articles(info, msg_id, post_time)
                time.sleep(3)  # pause between pages to avoid being throttled
            else:
                break

    def _parse_articles(self, info, msg_id, post_time):
        """Parse the nested article data and save it to the database"""

        title = info.get('title')  # title
        cover = info.get('cover')  # cover image
        author = info.get('author')  # author
        digest = info.get('digest')  # digest/summary
        source_url = info.get('source_url')  # original source URL
        content_url = info.get('content_url')  # WeChat article URL
        # ext_data = json.dumps(info, ensure_ascii=False)  # raw data

        content_url = content_url.replace('amp;', '').replace('#wechat_redirect', '').replace('http', 'https')
        article_id = self.postgres.handler(self._save_article(), (msg_id, title, author, cover, digest,
                                                                  source_url, content_url, post_time,
                                                                  datetime.now()), fetch=True)
        if article_id:
            self._parse_article_detail(content_url, article_id)

    def _parse_article_detail(self, content_url, article_id):
        """Extract from the article page the parameters needed for fetching comments;
        article_id is the id of the article already saved"""

        try:
            html = requests.get(content_url, headers=self.headers).text
        except Exception:
            print('Failed to fetch the article page: ' + content_url)
        else:
            # group(0) would be the whole matched line; group(1) holds the value we want
            str_comment = re.search(r'var comment_id = "(.*)" \|\| "(.*)" \* 1;', html)
            str_msg = re.search(r"var appmsgid = '' \|\| '(.*)'\|\|", html)
            str_token = re.search(r'window.appmsg_token = "(.*)";', html)

            if str_comment and str_msg and str_token:
                comment_id = str_comment.group(1)  # comment id (fixed per article)
                app_msg_id = str_msg.group(1)  # message id (changes per session)
                appmsg_token = str_token.group(1)  # page-level token (changes per session)

                # all three are required
                if appmsg_token and app_msg_id and comment_id:
                    print('Crawl article comments: ' + content_url)
                    self._crawl_comments(app_msg_id, comment_id, appmsg_token, article_id)

    def _crawl_comments(self, app_msg_id, comment_id, appmsg_token, article_id):
        """Fetch the comments of an article"""

        api = 'https://mp.weixin.qq.com/mp/appmsg_comment?action=getcomment&scene=0&__biz={0}' \
              '&appmsgid={1}&idx=1&comment_id={2}&offset=0&limit=100&uin=777&key=777' \
              '&pass_ticket={3}&wxtoken=777&appmsg_token={4}&x5=1&f=json'.format(
                  self.biz, app_msg_id, comment_id, self.pass_ticket, appmsg_token)
        resp = requests.get(api, headers=self.headers).json()
        base_resp = resp.get('base_resp', {})
        ret, status = base_resp.get('ret'), base_resp.get('errmsg')
        if ret == 0 or status == 'ok':
            elected_comment = resp.get('elected_comment', [])  # the displayed comments
            for comment in elected_comment:
                nick_name = comment.get('nick_name')  # commenter nickname
                logo_url = comment.get('logo_url')  # avatar URL
                comment_time = datetime.fromtimestamp(comment.get('create_time'))  # comment time
                content = comment.get('content')  # comment text
                content_id = comment.get('content_id')  # comment id
                like_num = comment.get('like_num')  # like count
                self.postgres.handler(self._save_article_comment(), (article_id, comment_id, nick_name,
                                                                     logo_url, content_id, content,
                                                                     like_num, comment_time, datetime.now()))

    @staticmethod
    def _save_article():
        sql = 'insert into tb_article(msg_id,title,author,cover,digest,source_url,content_url,post_time,create_time) ' \
              'values(%s,%s,%s,%s,%s,%s,%s,%s,%s)'
        return sql

    @staticmethod
    def _save_article_comment():
        sql = 'insert into tb_article_comment(article_id,comment_id,nick_name,logo_url,content_id,content,' \
              'like_num,comment_time,create_time) values(%s,%s,%s,%s,%s,%s,%s,%s,%s)'
        return sql


if __name__ == '__main__':
    biz = 'MzI2NDk5NzA0Mw=='  # "36氪"
    pass_ticket = 'NDndxxaZ7p6Z9PYulWpLqMbI0i3ULFeCPIHBFu1sf5pX2IhkGfyxZ6b9JieSYRUy'
    app_msg_token = '971_Z0lVNQBcGsWColSubRO9H13ZjrPhjuljyxLtiQ~~'
    cookie = 'wap_sid2=CO3YwOQHEogBQnN4VTNhNmxQWmc3UHI2U3kteWhUeVExZHFVMnN0QXlsbzVJRUJKc1pkdVFUU2Y5UzhSVEtOZmt1VVlYTkR4SEllQ2huejlTTThJWndMQzZfYUw2SldLVGVMQUthUjc3QWdVMUdoaGN0Nml2SU05cXR1dTN2RkhRUVd1V2Y3SFJ5d01BQUF+fjCB1pLcBTgNQJVO'
    # all of the above change per account and per capture session; refresh them from Fiddler
    wxMps = WxMps(biz, pass_ticket, app_msg_token, cookie)
    wxMps.start()  # start crawling articles and comments

Closing Notes

Finally, here is a look at the data in the database. Single-threaded crawling is slow, and I had no real need for this data, so this was just a quick trial run:

[Figure: a sample of the crawled data]

Writing crawlers is sometimes painstaking work. If this all feels like too much trouble, the WechatSogou project is worth a look. Questions are welcome in the comments below.
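
For a quick taste, WechatSogou wraps Sogou's WeChat search so you can avoid packet capture entirely; a minimal sketch based on my reading of the project's README (verify the method names against the current docs):

import wechatsogou

# Sogou may throw captchas; the API provides hooks for handling them
ws_api = wechatsogou.WechatSogouAPI()

# look up an official account by name (method name per the README)
info = ws_api.get_gzh_info('36氪')
print(info)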

Full code: GitHub

That wraps up this article. I hope it helps with your learning, and thanks for your continued support.

Tags:
python, crawling WeChat official accounts, fiddler

