URL Request Count - MRJob - Python Data Analysis (6)


1.1. Introduction

We compute the URL request count in these four steps:

Mapper: parse each log line into the form key=requested URL, value=1

Shuffle: the shuffle groups the mapper output by key, producing for each key an iterator over its values, with the keys sorted

The result looks like: requested URL [1, 1, 1 ... 1, 1]

Reduce 1: here we compute each requested URL's request count

The output looks like: None [sum([1, 1, 1 ... 1, 1]), key]

Reduce 2: sort by sum([1, 1, 1 ... 1, 1]) and output the TOP 100

The output looks like: 246361 "/wp-admin/admin-ajax.php"
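The four steps above can be simulated in plain Python without mrjob; the following is a minimal sketch using a made-up list of request paths:

```python
from itertools import groupby
import heapq

# hypothetical sample: one requested path per log line
log_paths = ["/", "/tag/", "/", "/wp-admin/admin-ajax.php", "/", "/tag/"]

# Mapper: emit one (url, 1) pair per line
mapped = [(path, 1) for path in log_paths]

# Shuffle: sort by key, then group the values into a per-key list
mapped.sort(key=lambda kv: kv[0])
shuffled = {url: [v for _, v in grp]
            for url, grp in groupby(mapped, key=lambda kv: kv[0])}

# Reduce 1: sum the 1s for each URL
summed = [(sum(ones), url) for url, ones in shuffled.items()]

# Reduce 2: TOP N by request count, descending
top = heapq.nlargest(2, summed)
print(top)  # [(3, '/'), (2, '/tag/')]
```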

1.2. Code

cat mr_url_req.py
# -*- coding: utf-8 -*-
 
from mrjob.job import MRJob
from mrjob.step import MRStep
from ng_line_parser import NgLineParser
 
import heapq
 
class MRUrlRef(MRJob):
 
    ng_line_parser = NgLineParser()
 
    def mapper(self, _, line):
        self.ng_line_parser.parse(line)
        # emit the URL parsed from this line (attribute name as defined
        # in ng_line_parser) together with a count of 1
        yield self.ng_line_parser.reference_url, 1
 
    def reducer_sum(self, key, values):
        """Sum the request count for each URL"""
        yield None, [sum(values), key]
 
    def reducer_top100(self, _, values):
        """TOP 100 by request count, descending"""
        for cnt, path in heapq.nlargest(100, values):
            yield cnt, path
 
    def steps(self):
        return [
            MRStep(mapper=self.mapper,
                   reducer=self.reducer_sum),
            MRStep(reducer=self.reducer_top100)
        ]
 
def main():
    MRUrlRef.run()
 
if __name__ == '__main__':
    main()
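The ng_line_parser module comes from an earlier article in this series and is not reproduced here, so its exact field names are an assumption on my part. A minimal stand-in that parses the nginx "combined" access log format and exposes both the requested URL and the referrer could look like:

```python
# -*- coding: utf-8 -*-
import re

class NgLineParser(object):
    """Hypothetical stand-in for the series' ng_line_parser module.
    Assumes the nginx 'combined' log format:
    ip - user [time] "METHOD /path PROTO" status bytes "referer" "agent"
    """

    _pattern = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
        r'"(?P<method>\S+) (?P<url>\S+) [^"]*" '
        r'(?P<status>\d{3}) \S+ "(?P<referer>[^"]*)"')

    def parse(self, line):
        m = self._pattern.match(line)
        # expose parsed fields as attributes, None on a malformed line
        self.request_url = m.group('url') if m else None
        self.reference_url = m.group('referer') if m else None

parser = NgLineParser()
parser.parse('1.2.3.4 - - [31/Oct/2016:00:58:12 +0800] '
             '"GET /tag/ HTTP/1.1" 200 512 "-" "Mozilla/5.0"')
print(parser.request_url)  # /tag/
```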

Run the job and inspect the output:

python mr_url_req.py < www.ttmark.com.access.log
 
No configs found; falling back on auto-configuration
Creating temp directory /tmp/mr_url_req.root.20160924.133027.483660
Running step 1 of 2...
reading from STDIN
Streaming final output from /tmp/mr_url_req.root.20160924.133027.483660/output...
246361  "/wp-admin/admin-ajax.php"
126012  "/tag/"
57325   "/"
......
778     "/meirong/2016/03/15/8442.html"
776     "/jiaju/2015/05/30/6058.html"
773     "/jiaju/2015/05/15/5747.html"
Removing temp directory /tmp/mr_url_req.root.20160924.133027.483660...

It is worth mentioning that we used the heapq.nlargest function to compute the TOP 100 URLs by request count. Many programmers would instead reach for something like sorted(values, reverse=True)[:100]. When the data volume is large, I strongly advise against that: memory gets exhausted and the job dies with an OOM error, because sorted first builds the complete sorted list and only then slices it. With a huge list that becomes a problem, whereas heapq.nlargest only ever keeps 100 elements in memory.
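The difference is easy to demonstrate with made-up data: both approaches return the same TOP 100, but heapq.nlargest maintains only a 100-element heap while sorted materializes and sorts the entire input first:

```python
import heapq
import random

random.seed(42)
# simulated (count, url) pairs, like the first reducer's output
pairs = [(random.randint(1, 1000000), "/page/%d" % i) for i in range(50000)]

top_heap = heapq.nlargest(100, pairs)         # O(n log k) time, O(k) memory
top_sort = sorted(pairs, reverse=True)[:100]  # O(n log n) time, O(n) memory

print(top_heap == top_sort)  # True
```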

Nickname: HH

  • Published by HH on 31/10/2016 00:58:12
  • Please keep this link when reposting: https://www.ttlsa.com/python/python-big-data-analysis-url-req-count-mrjob/