Title
  • en A study on a low power optimization algorithm for an edge-AI device
Creator
    • en Kaneko, Tatsuya
    • en Orimo, Kentaro
    • en Hida, Itaru
    • en Takamaeda, Shinya
    • NRID 1000060738897
    • Alternative name en Yamazaki, Shinya
    • en Motomura, Masato
Access rights metadata only access
Rights
  • en Copyright ©2019 The Institute of Electronics, Information and Communication Engineers
  • https://search.ieice.org/
Subject
  • Other en machine learning
  • Other en edge AI
  • Other en training algorithm
  • Other en backpropagation
  • Other en quantization
  • Other en low power
  • NDC 547
Description
  • Abstract en Although research on the inference phase of edge artificial intelligence (AI) has made considerable progress, the required training phase remains an unsolved problem. Neural network (NN) processing has two phases: inference and training. In the training phase, an NN incurs a high calculation cost. The number of bits (bitwidth) in the training phase is several orders of magnitude larger than that in the inference phase. Training algorithms optimized for software are not appropriate for training hardware-oriented NNs. Therefore, we propose a new training algorithm for edge AI: backpropagation (BP) using a ternarized gradient. This ternarized backpropagation (TBP) provides a balance between calculation cost and performance. Empirical results demonstrate that in a two-class classification task, TBP works well in practice and compares favorably with 16-bit BP (Fixed-BP).
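
Note: The abstract describes ternarized backpropagation (TBP), i.e. backpropagation in which the gradient is quantized to three levels before the weight update. As a minimal, hypothetical sketch of that idea (not the paper's exact algorithm), the Python fragment below ternarizes a full-precision gradient with an assumed per-tensor threshold and scale, and applies it in one update step of a single linear layer; the function names, the threshold rule, and the scale factor are illustrative assumptions.

    import numpy as np

    def ternarize(grad, threshold=0.7):
        # Map each gradient element to {-scale, 0, +scale}; the per-tensor
        # threshold and scale used here are assumptions, not the paper's scheme.
        delta = threshold * np.mean(np.abs(grad))
        mask = np.abs(grad) > delta
        scale = np.mean(np.abs(grad[mask])) if np.any(mask) else 0.0
        return scale * np.sign(grad) * mask

    def train_step(W, x, y_true, lr=0.1):
        # One backpropagation step of a single linear layer (squared error),
        # with the weight gradient ternarized before the update.
        err = W @ x - y_true               # gradient of 0.5*||Wx - y||^2 w.r.t. Wx
        grad_W = np.outer(err, x)          # full-precision weight gradient
        return W - lr * ternarize(grad_W)  # update with the ternarized gradient

Under this sketch, each weight update reduces to adding, subtracting, or skipping a single per-tensor scale value, which illustrates the calculation-cost/performance trade-off the abstract points to.
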
Publisher en 電子情報通信学会 (The Institute of Electronics, Information and Communication Engineers / IEICE)
Date
    Issued 2019-10
Language
  • eng
Resource type journal article
Publication type NA
Resource identifier HDL http://hdl.handle.net/2115/76016
Relation
  • isIdenticalTo DOI https://doi.org/10.1587/nolta.10.373
Journal information
    • PISSN 2185-4106
      • en Nonlinear theory and its applications, IEICE
      • Volume 10, Issue 4, Start page 373, End page 389
Content last updated 2023-07-26