This example implements a Python program for reading the picture edition of the newspaper Cankao Xiaoxi (Reference News): it automatically downloads the current day's scanned pages to a local folder for viewing. The full implementation is shown below:
# coding=gbk
import urllib2
import socket
import re
import time
import os
import sys

# Give up on any network operation after 10 seconds.
timeout = 10
socket.setdefaulttimeout(timeout)
home_url = "http://www.hqck.net"
home_page = ""
try:
  home_page_context = urllib2.urlopen(home_url)
  home_page = home_page_context.read()
  print "Read home page finished."
  print "-------------------------------------------------"
except urllib2.HTTPError, e:
  print e.code
  exit()
except urllib2.URLError, e:
  print e.reason
  exit()
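# Today's edition is linked as /arc/jwbt/ckxx/<year>/<mmdd>/<id>.html; match the
# whole <a> tag here and trim it down to the href below.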
reg_str = r'<a class="item-baozhi" href="/arc/jwbt/ckxx/\d{4}/\d{4}/\w+\.html" rel="external nofollow" ><span class.+>.+</span></a>'
news_url_reg = re.compile(reg_str)
today_cankao_news = news_url_reg.findall(home_page)
if len(today_cankao_news) == 0:
  print "Cannot find today's news!"
  exit()
my_news = today_cankao_news[0]
print "Latest news link = " + my_news
print
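# Cut the relative link out of the matched tag: from "/arc/" up to and including ".html" (5 characters).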
url_s = my_news.find("/arc/")
url_e = my_news.find(".html")
url_e = url_e + 5
print "Link index = [" + str(url_s) + "," + str(url_e) + "]"
my_news = my_news[url_s:url_e]
print "part url = " + my_news
full_news_url = home_url + my_news
print "full url = " + full_news_url
print
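# Store today's pages under E:\new_folder\<YYYY-MM-DD>\, creating the folders if needed.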
image_folder = "E:\\new_folder\\"
if not os.path.exists(image_folder):
  os.makedirs(image_folder)
today_num = time.strftime('%Y-%m-%d', time.localtime(time.time()))
image_folder = image_folder + today_num + "\\"
if not os.path.exists(image_folder):
  os.makedirs(image_folder)
print "News image folder = " + image_folder
print
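# Page 1 of the edition is <id>.html and later pages are <id>_N.html, so keep
# the URI without the ".html" suffix as a base.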
context_uri = full_news_url[0:-5]
first_page_url = context_uri + ".html"
try:
  first_page_context = urllib2.urlopen(first_page_url)
  first_page = first_page_context.read()
except urllib2.HTTPError, e:
  print e.code
  exit()
# The page contains a marker like "共N页" ("N pages in total"); in GBK the
# character "共" occupies two bytes, so the digits start at offset 2 of the slice.
tot_page_index = first_page.find("共")
tmp_str = first_page[tot_page_index:tot_page_index+10]
end_s = tmp_str.find("页")
page_num = tmp_str[2:end_s]
page_count = int(page_num)
print "Total " + page_num + " pages:"
print
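# Walk every page of today's edition and save its scanned image as page_<n>.jpg.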
page_index = 1
download_suc = True
while page_index <= page_count:
  page_url = context_uri
  if page_index > 1:
    page_url = page_url + "_" + str(page_index)
  page_url = page_url + ".html"
  print "News page link = " + page_url
  try:
    news_img_page_context = urllib2.urlopen(page_url)
  except urllib2.URLError,e:
    print e.reason
    download_suc = False
    break
  
  news_img_page = news_img_page_context.read()
  #f = open("e:\\page.html", "w")
  #f.write(news_img_page)
  #f.close()
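  # The page embeds the scanned newspaper image; grab the first URL of the form http://image...jpg.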
  reg_str = r'http://image\S+jpg'
  image_reg = re.compile(reg_str)
  image_results = image_reg.findall(news_img_page)
  if len(image_results) == 0:
    print "Cannot find news page" + str(page_index) + "!"
    download_suc = False
    break
  
  image_url = image_results[0]
  print "News image url = " + image_url
  news_image_context = urllib2.urlopen(image_url)
  image_name = image_folder + "page_" + str(page_index) + ".jpg"
  imgf = open(image_name, 'wb')
  print "Getting image..."
  try:
    # Stream the image to disk in 10 KB chunks rather than reading it all at once.
    while True:
      data = news_image_context.read(1024*10)
      if not data:
        break
      imgf.write(data)
    imgf.close()
  except:
    download_suc = False
    print "Save image " + str(page_index) + " failed!"
    print "Unexpected error:", sys.exc_info()[0], sys.exc_info()[1]
  else:
    print "Save image " + str(page_index) + " succeeded!"
    print
  page_index = page_index + 1
if download_suc:
  print "News download succeeded! Path = \"" + str(image_folder) + "\""
  print "Enjoy it! ^^"
else:
  print "News download failed!"