Biography:
June 2018 – Present: Associate Professor, Institute of Computing Technology, Chinese Academy of Sciences
Selected Publications:
Journal Articles:
[1] Cheng Liu, Cheng Chu, Dawen Xu, Ying Wang, Qianlong Wang, Huawei Li, Xiaowei Li, Kwang-Ting Cheng, "HyCA: A Hybrid Computing Architecture for Fault Tolerant Deep Learning", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 2021.
[2] Dawen Xu, Meng He, Cheng Liu*, Ying Wang, Long Cheng, Huawei Li, Xiaowei Li, Kwang-Ting Cheng, "R2F: A Remote Retraining Framework for AIoT Processors with Computing Errors", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2021.
[3] Dawen Xu, Ziyang Zhu, Cheng Liu*, Ying Wang, Shuang Zhao, Lei Zhang, Huaguo Liang, Huawei Li, Kwang-Ting Cheng, "Reliability Evaluation and Analysis of FPGA-based Neural Network Acceleration System", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2021.
[4] Dawen Xu#, Cheng Liu#, Ying Wang, Kaijie Tu, Huawei Li, Bingsheng He, Lei Zhang, "Accelerating Generative Neural Networks on Unmodified Deep Learning Processors - A Software Approach", IEEE Transactions on Computers (TC), 2020.
[5] Shengwen Liang, Ying Wang, Cheng Liu, Lei He, Huawei Li, Dawen Xu, Xiaowei Li, "EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks", IEEE Transactions on Computers (TC), 2020. (Featured Paper of the Month)
[6] Chuangyi Gui, Long Zheng, Bingsheng He, Cheng Liu, Xinyu Chen, Xiaofei Liao, Hai Jin, "A Survey on Graph Processing Accelerators: Challenges and Opportunities", Journal of Computer Science and Technology (JCST), 2019.
[7] Ying Wang, Yin-He Han, Lei Zhang, Bin-Zhang Fu, Cheng Liu, Hua-Wei Li, Xiaowei Li, "Economizing TSV Resources in 3-D Network-on-Chip Design", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, no. 3 (2015): 493-506.
[8] Yin-He Han, Cheng Liu, Hang Lu, Wen-Bo Li, Lei Zhang, Xiao-Wei Li, "RevivePath: Resilient Network-on-Chip Design through Data Path Salvaging of Router", Journal of Computer Science and Technology (JCST), 2013.
Conference Papers:
[1] Cangyuan Li, Ying Wang*, Cheng Liu*, Shengwen Liang, Huawei Li, Xiaowei Li, "GLIST: Towards In-Storage Graph Learning", USENIX Annual Technical Conference (ATC), 2021.
[2] Xiaohan Ma, Chang Si, Ying Wang, Cheng Liu, Lei Zhang, "Accelerating Neural Network Design with a NAS Processor", The 48th IEEE/ACM International Symposium on Computer Architecture (ISCA), 2021.
[3] Lei He#, Cheng Liu#, Ying Wang, Shengwen Liang, Huawei Li, Xiaowei Li, "GCiM: A Near-Data Processing Accelerator for Graph Construction", IEEE/ACM Design Automation Conference (DAC), 2021.
[4] Yuquan He, Ying Wang*, Cheng Liu*, Lei Zhang, "PicoVO: A Lightweight RGB-D Visual Odometry Targeting Resource-Constrained IoT Devices", IEEE International Conference on Robotics and Automation (ICRA), 2021.
[5] Mengdi Wang, Bing Li, Ying Wang, Cheng Liu, Lei Zhang, "MT-DLA: An Efficient Multi-Task Deep Learning Accelerator Design", Great Lakes Symposium on VLSI (GLSVLSI), 2021. (Best Paper Award)
[6] Xiandong Zhao, Ying Wang, Cheng Liu, Cong Shi, Lei Zhang, "BitPruner: Network Pruning for Bit-Serial Accelerators", IEEE/ACM Design Automation Conference (DAC), 2020.
[7] Dawen Xu, Cheng Chu, Qianlong Wang, Cheng Liu*, Ying Wang, Lei Zhang, Huaguo Liang, Kwang-Ting Tim Cheng, "A Hybrid Computing Architecture for Fault-tolerant Deep Learning Accelerators", The 38th IEEE International Conference on Computer Design (ICCD), October 2020.
[8] Shengwen Liang#, Cheng Liu#, Ying Wang, Huawei Li, Xiaowei Li, "DeepBurning-GL: An Automated Framework for Generating Graph Neural Network Accelerators", IEEE/ACM International Conference on Computer-Aided Design (ICCAD), November 2020.
[9] Cheng Liu, Xinyu Chen, Bingsheng He, Ying Wang, Xiaofei Liao, Lei Zhang, "OBFS: OpenCL Based BFS Optimization on Software Programmable FPGAs", International Conference on Field Programmable Technology (FPT), December 11-13, 2019.
[10] Shengwen Liang, Ying Wang, Cheng Liu, Huawei Li, Xiaowei Li, "InS-DLA: An In-SSD Deep Learning Accelerator for Near-Data Processing", International Conference on Field-Programmable Logic and Applications (FPL), September 9-11, 2019.
[11] Dawen Xu, Kaijie Tu, Ying Wang, Cheng Liu, Bingsheng He, Huawei Li, "FCN-engine: Accelerating Deconvolutional Layers in Classic CNN Processors", IEEE/ACM International Conference on Computer-Aided Design (ICCAD), p. 22, 2018.
[12] Cheng Liu, Ho-Cheung Ng, Hayden Kwok-Hay So, "QuickDough: A Rapid FPGA Loop Accelerator Design Framework Using Soft CGRA Overlay", International Conference on Field Programmable Technology (FPT), pp. 56-63, 2015.
Research Projects:
[1] National Natural Science Foundation of China (NSFC) General Program, Research on Resilient Fault-Tolerance Techniques for Deep Learning Processors, 2022/1-2025/12, Principal Investigator
[2] National Natural Science Foundation of China (NSFC) Young Scientists Fund, Research on Specialized Energy-Efficient FPGA-Based Graph Processing Acceleration, 2020/1-2022/12, Principal Investigator
[3] Key Project of the State Key Laboratory of Computer Architecture, Automated Design of Fault-Tolerant Deep Learning Processors, 2021/6-2022/12, Principal Investigator
[4] Chinese Academy of Sciences, STS Program Project, Ultra-Compact Intelligent Computer, 2019/1-2019/12, Key Participant
[5] Chinese Academy of Sciences, Strategic Priority Research Program (Category C) Subproject, Open-Source Intelligent IoT End-Device Processor, 2020/1-2021/12, Key Participant
[6] Huawei, Research on Streaming Computing Systems Based on SmartNICs and Smart Storage Devices, 2020/6-2021/12, Key Participant
Cheng Liu, Associate Professor
Research Area: Computer Architecture
Department: Key Laboratory of Processor Chips
Advisor Category: Master's Student Supervisor
Contact: liucheng@ict.ac.cn
Homepage: https://liu-cheng.github.io/