tuon 1 year ago
commit
36532af7af
15 changed files with 821 additions and 0 deletions
  1. .gitignore (+127 −0)
  2. Dockerfile (+15 −0)
  3. Makefile (+4 −0)
  4. README.md (+114 −0)
  5. _log.py (+202 −0)
  6. demo/human.jpg (BIN)
  7. docker-compose.yaml (+21 −0)
  8. install.sh (+2 −0)
  9. output/human.jpg (BIN)
  10. requirements.txt (+15 −0)
  11. task.py (+109 −0)
  12. test.sh (+3 −0)
  13. tools/__init__.py (+1 −0)
  14. tools/image.py (+157 −0)
  15. utils/color.py (+51 −0)

+ 127 - 0
.gitignore

@@ -0,0 +1,127 @@
+# Mac system
+.DS_Store
+
+# Pycharm
+.idea/
+
+# VSCode
+.vscode/
+
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+#  Usually these files are written by a python script from a template
+#  before PyInstaller builds the exe, so as to inject date/other info into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+.hypothesis/
+.pytest_cache/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# pyenv
+.python-version
+
+# celery beat schedule file
+celerybeat-schedule
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+
+# js
+node_modules/
+package-lock.json
+test_tipc/web/models/
+
+# EISeg
+EISeg/eiseg/config/setting.txt
+
+/outputs
+/uploads
+!/uploads/1.txt
+
+/models
+/logs

+ 15 - 0
Dockerfile

@@ -0,0 +1,15 @@
+FROM registry.cn-hangzhou.aliyuncs.com/tuon-pub/python:3.10.4
+
+WORKDIR /app
+
+# Install Python dependencies first so this layer is cached across code changes
+COPY requirements.txt .
+RUN pip install --no-cache-dir -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+# Switch APT to the USTC mirror, then install the OpenGL runtime that opencv-python needs
+RUN sed -i 's/deb.debian.org/mirrors.ustc.edu.cn/g; s/security.debian.org/mirrors.ustc.edu.cn/g' /etc/apt/sources.list \
+    && apt-get update \
+    && apt-get install -y --no-install-recommends libgl1-mesa-glx \
+    && rm -rf /var/lib/apt/lists/*
+
+COPY . .
+CMD ["python", "task.py"]

+ 4 - 0
Makefile

@@ -0,0 +1,4 @@
+VERSION=latest
+.PHONY: build
+build:
+	docker build -t registry.cn-hangzhou.aliyuncs.com/tuon-pub/img-processor:$(VERSION) -f ./Dockerfile .
+	docker push registry.cn-hangzhou.aliyuncs.com/tuon-pub/img-processor:$(VERSION)

+ 114 - 0
README.md

@@ -0,0 +1,114 @@
+# Image Matting
+
+## Model Downloads
+- [General object matting](https://paddleseg.bj.bcebos.com/matting/models/deploy/ppmatting-hrnet_w48-composition.zip)
+- [Human matting](https://paddleseg.bj.bcebos.com/matting/models/ppmattingv2-stdc1-human_512.pdparams)
+
+The human matting model works well; the others have not been tried.
+
+## Contents
+* [Introduction](#introduction)
+* [Updates](#updates)
+* [Community](#community)
+* [Model Zoo](#model-zoo)
+* [Tutorials](#tutorials)
+* [Community Contributions](#community-contributions)
+* [Citation](#citation)
+
+
+## Introduction
+
+Image matting (fine-grained segmentation / background removal) is the technique of separating the foreground from an image by estimating its color and transparency. It is used for background replacement, image compositing, and visual effects, and is widely applied in the film industry.
+Each pixel in an image carries a value that represents the transparency of its foreground, called the alpha value; the set of all alpha values in an image is called the alpha matte. Extracting the part of the image covered by the matte completes the foreground separation, as the sketch below illustrates.
+
+
+<p align="center">
+<img src="https://user-images.githubusercontent.com/30919197/179751613-d26f2261-7bcf-4066-a0a4-4c818e7065f0.gif" width="100%" height="100%">
+</p>
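+
+A minimal NumPy sketch of this compositing step (illustrative only; `fg`, `bg`, and `alpha` are hypothetical inputs, not files in this repository):
+
+```python
+import numpy as np
+
+def composite(fg: np.ndarray, bg: np.ndarray, alpha: np.ndarray) -> np.ndarray:
+    """Blend foreground over background: I = alpha * F + (1 - alpha) * B.
+
+    fg, bg: HxWx3 uint8 images; alpha: HxW matte scaled to [0, 1].
+    """
+    a = alpha[..., None]  # broadcast the matte over the color channels
+    return (a * fg + (1.0 - a) * bg).astype(np.uint8)
+```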
+
+## Updates
+* 2022.11
+  * **Open-sourced PP-MattingV2, a self-developed lightweight SOTA matting model.** Compared with MODNet, PP-MattingV2 improves inference speed by 44.6% and reduces the average error by 17.91%.
+  * Reorganized the documentation and improved the model zoo information.
+  * [FastDeploy](https://github.com/PaddlePaddle/FastDeploy) deployment now supports the PP-MattingV2, PP-Matting, PP-HumanMatting, and MODNet models.
+* 2022.07
+  * Open-sourced the PP-Matting code; added the classic machine-learning algorithms ClosedFormMatting, KNNMatting, FastMatting, LearningBaseMatting, and RandomWalksMatting; added the GCA model.
+  * Improved the directory structure; supported evaluation on user-specified metrics.
+* 2022.04
+  * **Open-sourced PP-Matting, a self-developed high-accuracy SOTA matting model**; added the PP-HumanMatting high-resolution human matting model.
+  * Added the Grad and Conn evaluation metrics; added foreground estimation, using the [ML](https://arxiv.org/pdf/2006.14970.pdf) algorithm to estimate the foreground during prediction and background replacement.
+  * Added GradientLoss and LaplacianLoss; added the RandomSharpen, RandomReJpeg, and RSSN data-augmentation strategies.
+* 2021.11
+  * **Open-sourced the Matting project**, implementing image matting.
+  * Supported the DIM and MODNet matting models; supported model export and Python deployment; supported background replacement; supported Android deployment of human matting.
+
+## Community
+
+* For usage questions and feature requests, please file an issue on [GitHub Issues](https://github.com/PaddlePaddle/PaddleSeg/issues).
+* **Welcome to join the PaddleSeg WeChat user group 👫** (scan the QR code and fill in a short questionnaire to join), where you can talk directly with the team on duty and community experts, and **receive a 30 GB learning gift pack 🎁**:
+  * 🔥 Deep-learning video courses and a curated collection of image-segmentation papers
+  * 🔥 Recordings of past PaddleSeg live streams, plus the latest release and live-stream announcements
+  * 🔥 PaddleSeg's self-built human-segmentation dataset and a set of curated open-source datasets
+  * 🔥 PaddleSeg pretrained models and application collections for vertical scenarios, covering human segmentation, interactive segmentation, and more
+  * 🔥 End-to-end industrial examples built on PaddleSeg, including defect segmentation for quality inspection, matting, road segmentation, and more
+<div align="center">
+<img src="https://user-images.githubusercontent.com/30883834/213601179-0813a896-11e1-4514-b612-d145e068ba86.jpeg"  width = "200" />  
+</div>
+
+## Model Zoo
+
+For the high-frequency application scenario of human matting, we trained and open-sourced a **high-quality human matting model zoo**. The models can be deployed directly or fine-tuned for a specific application.
+
+The model zoo includes our self-developed high-accuracy PP-Matting model and the lightweight PP-MattingV2 model.
+- PP-Matting is PaddleSeg's self-developed high-accuracy matting model, which performs semantics-guided high-resolution image matting through a guidance-flow design. Recommended when accuracy matters most.
+    Pretrained models are provided at two resolutions, 512 and 1024.
+- PP-MattingV2 is PaddleSeg's self-developed lightweight SOTA matting model, which extracts high-level semantics through double pyramid pooling and spatial attention, and fuses multi-level features to predict both semantics and fine detail.
+    Compared with MODNet, inference is 44.6% faster and the average error is 17.91% lower. Recommended when speed matters most.
+
+
+| Model | SAD | MSE | Grad | Conn | Params(M) | FLOPs(G) | FPS | Config File | Checkpoint | Inference Model |
+| -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
+| PP-MattingV2-512   |40.59|0.0038|33.86|38.90| 8.95 | 7.51 | 98.89 |[cfg](../configs/ppmattingv2/ppmattingv2-stdc1-human_512.yml)| [model](https://paddleseg.bj.bcebos.com/matting/models/ppmattingv2-stdc1-human_512.pdparams) | [model inference](https://paddleseg.bj.bcebos.com/matting/models/deploy/ppmattingv2-stdc1-human_512.zip) |
+| PP-Matting-512     |31.56|0.0022|31.80|30.13| 24.5 | 91.28 | 28.9 |[cfg](../configs/ppmatting/ppmatting-hrnet_w18-human_512.yml)| [model](https://paddleseg.bj.bcebos.com/matting/models/ppmatting-hrnet_w18-human_512.pdparams) | [model inference](https://paddleseg.bj.bcebos.com/matting/models/deploy/ppmatting-hrnet_w18-human_512.zip) |
+| PP-Matting-1024    |66.22|0.0088|32.90|64.80| 24.5 | 91.28 | 13.4(1024X1024) |[cfg](../configs/ppmatting/ppmatting-hrnet_w18-human_1024.yml)| [model](https://paddleseg.bj.bcebos.com/matting/models/ppmatting-hrnet_w18-human_1024.pdparams) | [model inference](https://paddleseg.bj.bcebos.com/matting/models/deploy/ppmatting-hrnet_w18-human_1024.zip) |
+| PP-HumanMatting    |53.15|0.0054|43.75|52.03| 63.9 | 135.8 (2048X2048)| 32.8(2048X2048)|[cfg](../configs/human_matting/human_matting-resnet34_vd.yml)| [model](https://paddleseg.bj.bcebos.com/matting/models/human_matting-resnet34_vd.pdparams) | [model inference](https://paddleseg.bj.bcebos.com/matting/models/deploy/pp-humanmatting-resnet34_vd.zip) |
+| MODNet-MobileNetV2 |50.07|0.0053|35.55|48.37| 6.5 | 15.7 | 68.4 |[cfg](../configs/modnet/modnet-mobilenetv2.yml)| [model](https://paddleseg.bj.bcebos.com/matting/models/modnet-mobilenetv2.pdparams) | [model inference](https://paddleseg.bj.bcebos.com/matting/models/deploy/modnet-mobilenetv2.zip) |
+| MODNet-ResNet50_vd |39.01|0.0038|32.29|37.38| 92.2 | 151.6 | 29.0 |[cfg](../configs/modnet/modnet-resnet50_vd.yml)| [model](https://paddleseg.bj.bcebos.com/matting/models/modnet-resnet50_vd.pdparams) | [model inference](https://paddleseg.bj.bcebos.com/matting/models/deploy/modnet-resnet50_vd.zip) |
+| MODNet-HRNet_W18   |35.55|0.0035|31.73|34.07| 10.2 | 28.5 | 62.6 |[cfg](../configs/modnet/modnet-hrnet_w18.yml)| [model](https://paddleseg.bj.bcebos.com/matting/models/modnet-hrnet_w18.pdparams) | [model inference](https://paddleseg.bj.bcebos.com/matting/models/deploy/modnet-hrnet_w18.zip) |
+| DIM-VGG16          |32.31|0.0233|28.89|31.45| 28.4 | 175.5| 30.4 |[cfg](../configs/dim/dim-vgg16.yml)| [model](https://paddleseg.bj.bcebos.com/matting/models/dim-vgg16.pdparams) | [model inference](https://paddleseg.bj.bcebos.com/matting/models/deploy/dim-vgg16.zip) |
+
+
+**Notes**:
+* The evaluation dataset combines the human portions of PPM-100 and AIM-500, 195 images in total: [PPM-AIM-195](https://paddleseg.bj.bcebos.com/matting/datasets/PPM-AIM-195.zip).
+* FLOPs and FPS are measured with a default input size of (512, 512) on a Tesla V100 32G GPU; FPS is measured with the Paddle Inference library.
+* DIM is a trimap-based matting method, and its metrics are computed only over the transition region. When no trimap is provided, the region where 0 < alpha < 255, dilated and eroded with a 25-pixel radius, is used as the transition region by default (see the sketch below).
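+
+A simplified sketch of that trimap construction with OpenCV (dilation only; assumes `alpha` is a uint8 H×W matte and uses the 25 px radius from the note above):
+
+```python
+import cv2
+import numpy as np
+
+def make_trimap(alpha, radius=25):
+    """Mark the dilated 0 < alpha < 255 band as the unknown region (value 128)."""
+    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * radius + 1, 2 * radius + 1))
+    unknown = cv2.dilate(((alpha > 0) & (alpha < 255)).astype(np.uint8), k)
+    trimap = np.where(alpha == 255, 255, 0).astype(np.uint8)
+    trimap[unknown == 1] = 128
+    return trimap
+```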
+
+## Tutorials
+* [Online demo](docs/online_demo_cn.md)
+* [Quick start](docs/quick_start_cn.md)
+* [Full development workflow](docs/full_develop_cn.md)
+* [Android deployment of human matting](deploy/human_matting_android_demo/README_CN.md)
+* [.NET deployment of human matting](https://gitee.com/raoyutian/PaddleSegSharp)
+* [Dataset preparation](docs/data_prepare_cn.md)
+* Third-party AI Studio tutorials
+  * [PaddleSeg Matting tutorial](https://aistudio.baidu.com/aistudio/projectdetail/3876411?contributionType=1)
+  * [PP-Matting image matting tutorial](https://aistudio.baidu.com/aistudio/projectdetail/5002963?contributionType=1)
+
+## Community Contributions
+* Thanks to [Qianbin](https://github.com/qianbin1989228) and other developers for their contributions.
+* Thanks to Jizhizi Li et al. for the [GFM](https://arxiv.org/abs/2010.16188) matting framework, which supported the development of PP-Matting.
+
+## Citation
+```
+@article{chen2022pp,
+  title={PP-Matting: High-Accuracy Natural Image Matting},
+  author={Chen, Guowei and Liu, Yi and Wang, Jian and Peng, Juncai and Hao, Yuying and Chu, Lutao and Tang, Shiyu and Wu, Zewu and Chen, Zeyu and Yu, Zhiliang and others},
+  journal={arXiv preprint arXiv:2204.09433},
+  year={2022}
+}
+```
+
+## References
+https://gitee.com/paddlepaddle/PaddleSeg/blob/release/2.8/Matting/docs/quick_start_cn.md

+ 202 - 0
_log.py

@@ -0,0 +1,202 @@
+# coding: utf-8
+
+import logging
+import types
+import datetime
+import os
+from collections import namedtuple
+from enum import Enum
+import simplejson
+from pythonjsonlogger import jsonlogger
+
+
+###############################################################################
+
+class ServiceType(Enum):
+    AGENT = 1
+    TASK = 2
+    EXPORT = 3
+
+
+class EventType(Enum):
+    LOG = 1
+    TASK_STARTED = 2
+    TASK_FINISHED = 3
+    TASK_STOPPED = 4
+    TASK_CRASHED = 5
+    STEP_COMPLETE = 6
+    PROGRESS = 7
+    METRICS = 8
+    AGENT_READY = 9
+    TASK_VERIFIED = 10
+    TASK_REJECTED = 11
+    TASK_SUBMITTED = 12
+    TASK_SCHEDULED = 13
+    AGENT_EXITED = 14
+    LOGA = 15
+    STEP_STARTED = 16
+    FILES_UPLOADED = 17
+
+###############################################################################
+# predefined levels
+
+
+# level name: level, default exc_info, description
+LogLevelSpec = namedtuple('LogLevelSpec', [
+    'int',
+    'add_exc_info',
+    'descr',
+])
+
+LOGGING_LEVELS = {
+    'FATAL': LogLevelSpec(50, True, 'Critical error'),
+    'ERROR': LogLevelSpec(40, True, 'Error'),  # may be shown to end user
+    'WARN': LogLevelSpec(30, False, 'Warning'),  # may be shown to end user
+    'INFO': LogLevelSpec(20, False, 'Info'),  # may be shown to end user
+    'DEBUG': LogLevelSpec(10, False, 'Debug'),
+    'TRACE': LogLevelSpec(5, False, 'Trace'),
+}
+
+
+def _set_logging_levels(levels, the_logger):
+    for lvl_name, (lvl, def_exc_info, _) in levels.items():
+        logging.addLevelName(lvl, lvl_name.upper())  # two mappings
+
+        def construct_logger_member(lvl_val, default_exc_info):
+            return lambda self, msg, *args, exc_info=default_exc_info, **kwargs: \
+                self.log(lvl_val,
+                         msg,
+                         *args,
+                         exc_info=exc_info,
+                         **kwargs)
+
+        func = construct_logger_member(lvl, def_exc_info)
+        bound_method = types.MethodType(func, the_logger)
+        setattr(the_logger, lvl_name.lower(), bound_method)
+
+
+###############################################################################
+
+
+def _get_default_logging_fields():
+    supported_keys = [
+        'asctime',
+        # 'created',
+        # 'filename',
+        # 'funcName',
+        'levelname',
+        # 'levelno',
+        # 'lineno',
+        # 'module',
+        # 'msecs',
+        'message',
+        # 'name',
+        # 'pathname',
+        # 'process',
+        # 'processName',
+        # 'relativeCreated',
+        # 'thread',
+        # 'threadName'
+    ]
+    return ' '.join(['%({0:s})'.format(k) for k in supported_keys])
+
+
+def dumps_ignore_nan(obj, *args, **kwargs):
+    return simplejson.dumps(obj, ignore_nan=True, ensure_ascii=False, *args, **kwargs)
+
+
+class CustomJsonFormatter(jsonlogger.JsonFormatter):
+    additional_fields = {}
+
+    def __init__(self, format_string):
+        super().__init__(format_string, json_serializer=dumps_ignore_nan)
+
+    def process_log_record(self, log_record):
+        log_record['timestamp'] = log_record.pop('asctime', None)
+
+        levelname = log_record.pop('levelname', None)
+        if levelname is not None:
+            log_record['level'] = levelname.lower()
+
+        e_info = log_record.pop('exc_info', None)
+        # the stdlib renders an absent exc_info as the literal string 'NoneType: None'
+        if e_info is not None and e_info != 'NoneType: None':
+            log_record['stack'] = e_info.split('\n')
+
+        return jsonlogger.JsonFormatter.process_log_record(self, log_record)
+
+    def add_fields(self, log_record, record, message_dict):
+        super(CustomJsonFormatter, self).add_fields(log_record, record, message_dict)
+
+        for field, val in CustomJsonFormatter.additional_fields.items():
+            if (val is not None) and (field not in log_record):
+                log_record[field] = val
+
+    def formatTime(self, record, datefmt=None):
+        ct = datetime.datetime.fromtimestamp(record.created)
+        t = ct.strftime('%Y-%m-%dT%H:%M:%S')
+        s = '%s.%03dZ' % (t, record.msecs)
+        return s
+
+
+def _construct_logger(the_logger, loglevel_text):
+    for handler in the_logger.handlers:
+        the_logger.removeHandler(handler)
+
+    _set_logging_levels(LOGGING_LEVELS, the_logger)
+
+    the_logger.setLevel(loglevel_text.upper())
+
+    log_handler = logging.StreamHandler()
+    add_logger_handler(the_logger, log_handler)
+
+    the_logger.propagate = False
+
+
+###############################################################################
+
+
+def add_logger_handler(the_logger, log_handler):  # default format
+    logger_fmt_string = _get_default_logging_fields()
+    formatter = CustomJsonFormatter(logger_fmt_string)
+    log_handler.setFormatter(formatter)
+    the_logger.addHandler(log_handler)
+
+
+def add_default_logging_into_file(the_logger, log_dir):
+    fname = 'log_{}.txt'.format(
+        datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S"))  # avoid ':' in filenames for portability
+    ofpath = os.path.join(log_dir, fname)
+
+    log_handler_file = logging.FileHandler(filename=ofpath)
+    add_logger_handler(the_logger, log_handler_file)
+
+
+# runs on all formatters
+def change_formatters_default_values(the_logger, field_name, value):
+    for handler in the_logger.handlers:
+        hfaf = handler.formatter.additional_fields
+        if value is not None:
+            hfaf[field_name] = value
+        else:
+            hfaf.pop(field_name, None)
+
+
+def set_global_logger():
+    loglevel = os.getenv('LOG_LEVEL', 'TRACE')  # use the env to set loglevel
+    the_logger = logging.getLogger('logger')  # optional logger name
+    _construct_logger(the_logger, loglevel)
+    return the_logger
+
+
+def get_task_logger(task_id):
+    loglevel = os.getenv('LOG_LEVEL', 'TRACE')  # use the env to set loglevel
+    logger_name = 'task_{}'.format(task_id)
+    the_logger = logging.getLogger(logger_name)  # optional logger name
+    _construct_logger(the_logger, loglevel)
+    return the_logger
+
+
+logger = set_global_logger()
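
A usage sketch for the `_log` module above (values are hypothetical; `logger.trace` and the `extra` fields come from the definitions in this diff):

```python
from _log import logger, EventType, change_formatters_default_values

# Stamp every subsequent record with an extra field (name and value are illustrative)
change_formatters_default_values(logger, 'task_id', 'demo-123')

logger.info('PROGRESS', extra={'event_type': EventType.PROGRESS, 'total': 10, 'current': 1})
logger.trace('low-level detail')  # TRACE (level 5) is registered by _set_logging_levels
```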

BIN
demo/human.jpg


+ 21 - 0
docker-compose.yaml

@@ -0,0 +1,21 @@
+version: "3"
+
+services:
+  matting:
+    image: registry.cn-hangzhou.aliyuncs.com/tuon-pub/img-processor
+    container_name: matting
+    ports:
+      - 20201:20201
+    volumes:
+      - ./uploads:/app/uploads
+      - ./outputs:/app/outputs
+      - ./models:/app/models
+    restart: always
+    deploy:
+      resources:
+        limits:
+          cpus: '1'
+          memory: 2G
+        reservations:
+          cpus: '0.5'
+          memory: 512M

+ 2 - 0
install.sh

@@ -0,0 +1,2 @@
+#!/bin/sh
+python -m pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

BIN
output/human.jpg


+ 15 - 0
requirements.txt

@@ -0,0 +1,15 @@
+scikit-image
+numba
+opencv-python~=4.5.5.64
+
+numpy
+pyyaml
+tqdm
+flask~=2.0.3
+six
+scipy
+pillow
+werkzeug==2.0.2
+wtforms
+simplejson
+python-json-logger==0.1.8

+ 109 - 0
task.py

@@ -0,0 +1,109 @@
+import os
+import time
+import json
+import base64
+import traceback
+from _log import logger, EventType
+import tools
+import shutil
+import cv2
+import copy
+
+
+def is_image(path: str):
+    # cv2.imread returns None when the file cannot be read or decoded as an image
+    return cv2.imread(path) is not None
+
+
+class Interface:
+    def __init__(self, interface=None):
+        if interface is None:
+            interface = os.getenv('PARAMS')
+        if interface is None:
+            interface = '{"width":512}'
+
+        self.interface = self.decode_b64(interface)
+
+        self.input = self.interface.get('input', '/input')
+        if not os.path.exists(self.input):
+            raise FileNotFoundError('The input path does not exist.')
+
+        self.result = self.interface.get('result', '/result')
+        if not os.path.exists(self.result):
+            os.makedirs(self.result)
+
+        self.params = tools.ProcessorParams(self.interface.get('width'),
+                                            self.interface.get('height'),
+                                            self.interface.get('rotate'),
+                                            self.interface.get('flip'),
+                                            self.interface.get('rect'),
+                                            self.result,
+                                            None)
+
+        self.input_dic = []
+
+    @staticmethod
+    def decode_b64(interface):
+        try:
+            # validate=True makes a plain JSON string raise instead of decoding to garbage
+            interface = base64.b64decode(interface.encode('utf-8'), validate=True).decode('utf-8')
+        except (ValueError, UnicodeDecodeError):
+            pass  # not base64-encoded; treat it as a plain JSON string
+
+        logger.info("PREPARE", extra={'event_type': EventType.PROGRESS, 'params': interface})
+
+        interface = json.loads(interface)
+
+        assert isinstance(interface, dict), 'The interface must be a dictionary.'
+
+        return interface
+
+    @staticmethod
+    def encode_b64(string):
+        return base64.b64encode(string.encode('utf-8')).decode("utf-8")
+
+    def scan_files(self, path: str):
+        if not os.path.isdir(path):
+            self.input_dic.append(path)
+            return
+        files = os.listdir(path)
+        for file in files:
+            self.scan_files(os.path.join(path, file))
+
+    def run(self):
+        logger.info("files scanning.", extra={'event_type': EventType.STEP_COMPLETE})
+        self.scan_files(self.input)
+        files_in_total = len(self.input_dic)
+        logger.info("files-in-total",
+                    extra={'event_type': EventType.METRICS, 'desc': '处理文件总个数', 'value': files_in_total})
+        count = 0
+        for path in self.input_dic:
+            count += 1
+            logger.info("PROGRESS", extra={'event_type': EventType.PROGRESS, 'total': files_in_total, 'current': count})
+            try:
+                params = copy.deepcopy(self.params)
+                params.set_path(path)
+                if not tools.processor(params):
+                    rel_path = os.path.relpath(path, self.input)
+                    target = os.path.join(self.result, rel_path)
+                    os.makedirs(os.path.dirname(target), exist_ok=True)  # preserve the source directory layout
+                    shutil.copy(path, target)
+
+            except Exception as e:
+                logger.info(e, extra={'event_type': EventType.TASK_CRASHED, 'traceback': traceback.format_exc()})
+
+
+def start(interface=None):
+    logger.info('TASK_STARTED', extra={'event_type': EventType.TASK_STARTED})
+    t = time.time()
+    try:
+        Interface(interface=interface).run()
+    except Exception as e:
+        logger.info(e, extra={'event_type': EventType.TASK_CRASHED, 'traceback': traceback.format_exc()})
+    logger.info("TASK_FINISHED", extra={'event_type': EventType.TASK_FINISHED})
+    logger.info("TASK_FINISHED")
+    elapsed = time.time() - t
+    logger.info("time-elapsed", extra={'event_type': EventType.METRICS, 'desc': '总时长', 'value': elapsed})
+
+
+if __name__ == '__main__':
+    start()
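
`Interface.decode_b64` accepts either plain or base64-encoded JSON in the `PARAMS` environment variable. A launch sketch (paths are hypothetical):

```python
import base64
import json
import os

import task

params = {'input': './demo', 'result': './output', 'width': 512}
os.environ['PARAMS'] = base64.b64encode(json.dumps(params).encode('utf-8')).decode('utf-8')
task.start()  # scans ./demo and writes resized copies to ./output
```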

+ 3 - 0
test.sh

@@ -0,0 +1,3 @@
+#!/bin/sh
+export PARAMS='{"input":"./demo"}'
+python task.py

+ 1 - 0
tools/__init__.py

@@ -0,0 +1 @@
+from .image import processor, ProcessorParams

+ 157 - 0
tools/image.py

@@ -0,0 +1,157 @@
+import os
+import numpy as np
+import cv2
+
+
+def is_empty(s: str):
+    return s is None or len(str(s)) == 0
+
+
+def to_int(s: str):
+    if isinstance(s, (int, float)):
+        return s
+    if is_empty(s):
+        return None
+    return int(str(s))
+
+
+def to_size_int(s: str):
+    v = to_int(s)
+    if v is None:
+        return v
+    if v < 1:
+        return None
+    return v
+
+
+class NotImageError(Exception):
+    def __init__(self):
+        super().__init__("File Is Not Image")
+
+
+class ProcessorParams:
+    def __init__(self, width, height, rotate, flip, rect, out_dir, path):
+        self.width = width
+        self.height = height
+        self.rotate = rotate
+        # Flip code (cv2.flip convention): 0 flips around the X axis,
+        # positive flips around the Y axis, negative flips around both axes
+        self.flip = flip
+        # Crop rectangle given as "left,top,right,bottom"
+        self.rect = rect
+        # Output directory
+        self.out_dir = out_dir
+        self.path = path
+
+    def set_path(self, path):
+        self.path = path
+
+
+def processor(params: ProcessorParams):
+    file_path = params.path
+
+    if not os.path.exists(file_path):
+        raise FileNotFoundError('The source image path does not exist.')
+
+    if not os.path.exists(params.out_dir):
+        os.makedirs(params.out_dir)
+
+    img = cv2.imread(file_path)
+    if img is None:
+        raise NotImageError()
+
+    origin_h = img.shape[0]
+    origin_w = img.shape[1]
+
+    dst, r_w, r_h = crop(img, params.rect, origin_w, origin_h)
+
+    width = to_size_int(params.width)
+    height = to_size_int(params.height)
+    change_size = True
+    if width is None and height is None:
+        change_size = False
+
+    if change_size:
+        # when only one dimension is given, derive the other from the
+        # aspect ratio of the original (uncropped) image
+        if width is None:
+            h = height
+            w = round(h * origin_w / origin_h)
+        elif height is None:
+            w = width
+            h = round(w * origin_h / origin_w)
+        else:
+            w = width
+            h = height
+        origin = img if dst is None else dst
+        dst = cv2.resize(origin, (w, h))
+        r_w = w
+        r_h = h
+
+    if r_w is None:
+        r_w = origin_w
+    if r_h is None:
+        r_h = origin_h
+
+    flip = params.flip
+    if not is_empty(flip):
+        flip = to_int(flip)
+        origin = img if dst is None else dst
+        dst = cv2.flip(origin, flip)
+
+    rotate = params.rotate
+    if not is_empty(rotate) and to_int(rotate) != 0:
+        origin = img if dst is None else dst
+        dst, _, _ = rot_degree(origin, float(rotate), w=r_w, h=r_h)
+
+    origin_filename = os.path.basename(file_path)
+    out_file = os.path.join(params.out_dir, origin_filename)
+
+    if dst is not None:
+        cv2.imwrite(out_file, dst)
+        return True
+    return False
+
+
+def crop(img, rect: str, w, h):
+    # Crop to the "left,top,right,bottom" rectangle.
+    # Returns (None, None, None) when nothing changes.
+    if is_empty(rect):
+        return None, None, None
+    r = rect.split(',')
+    if len(r) != 4:
+        return None, None, None
+
+    left = int(r[0])
+    top = int(r[1])
+    right = int(r[2])
+    bottom = int(r[3])
+
+    if left < 0:
+        left = 0
+    if right > w:
+        right = w
+    if top < 0:
+        top = 0
+    if bottom > h:
+        bottom = h
+
+    if left == 0 and top == 0 and right == w and bottom == h:
+        return None, None, None
+
+    return img[top:bottom, left:right], right - left, bottom - top
+
+
+def rot_degree(img, degree, w, h):
+    # Rotate around the image center, expanding the canvas so the corners are not clipped
+    center = (w / 2, h / 2)
+
+    M = cv2.getRotationMatrix2D(center, degree, 1)
+    top_right = np.array((w - 1, 0)) - np.array(center)
+    bottom_right = np.array((w - 1, h - 1)) - np.array(center)
+    top_right_after_rot = M[0:2, 0:2].dot(top_right)
+    bottom_right_after_rot = M[0:2, 0:2].dot(bottom_right)
+    new_width = max(int(abs(bottom_right_after_rot[0] * 2) + 0.5), int(abs(top_right_after_rot[0] * 2) + 0.5))
+    new_height = max(int(abs(top_right_after_rot[1] * 2) + 0.5), int(abs(bottom_right_after_rot[1] * 2) + 0.5))
+    offset_x = (new_width - w) / 2
+    offset_y = (new_height - h) / 2
+    M[0, 2] += offset_x
+    M[1, 2] += offset_y
+    dst = cv2.warpAffine(img, M, (new_width, new_height))
+    return dst, new_width, new_height
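
A call sketch for `tools.processor` (paths are hypothetical): crop to the 100×100 top-left region, resize to width 256 with the height derived from the original aspect ratio, then rotate by 90°:

```python
from tools import ProcessorParams, processor

params = ProcessorParams(width=256, height=None, rotate=90, flip=None,
                         rect='0,0,100,100', out_dir='./output', path=None)
params.set_path('./demo/human.jpg')
processor(params)  # writes ./output/human.jpg and returns True when anything changed
```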

+ 51 - 0
utils/color.py

@@ -0,0 +1,51 @@
+import os
+import cv2
+import numpy as np
+
+
+def get_bg(background, img_shape):
+    # The background can be: an image file path, a channel shorthand
+    # ('r', 'g', 'b', 'w'), or a hex color string such as '#rrggbb'
+    if background is None:
+        return None
+    bg = np.zeros(img_shape, dtype=np.uint8)  # OpenCV expects uint8 images
+    if os.path.exists(background):
+        bg = cv2.imread(background)
+        bg = cv2.resize(bg, (img_shape[1], img_shape[0]))
+    elif background == 'r':
+        bg[:, :, 2] = 255
+    elif background == 'g':
+        bg[:, :, 1] = 255
+    elif background == 'b':
+        bg[:, :, 0] = 255
+    elif background == 'w':
+        bg[:, :, :] = 255
+    elif is_color_hex(background):
+        r, g, b, _ = hex_to_rgb(background)
+        bg[:, :, 2] = r
+        bg[:, :, 1] = g
+        bg[:, :, 0] = b
+    else:
+        return None
+
+    return bg
+
+
+def is_color_hex(color: str):
+    size = len(color)
+    if color.startswith("#"):
+        return size == 7 or size == 9
+    return False
+
+
+def hex_to_rgb(color: str):
+    if color.startswith("#"):
+        color = color[1:]
+    r = int(color[0:2], 16)
+    g = int(color[2:4], 16)
+    b = int(color[4:6], 16)
+    a = 100
+    if len(color) == 8:
+        a = int(color[6:8], 16)
+    return r, g, b, a
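
A usage sketch for `get_bg` and `hex_to_rgb` (shapes and colors are illustrative):

```python
from utils.color import get_bg, hex_to_rgb

shape = (480, 640, 3)
bg_red = get_bg('r', shape)        # solid red background (note the BGR channel order)
bg_hex = get_bg('#00ff00', shape)  # solid green from a hex string
print(hex_to_rgb('#00ff00'))       # -> (0, 255, 0, 100); alpha defaults to 100
```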