
Move the duer_os code into example/, and add the catkin_ws ROS workspace; the ROS voice control system is implemented here

corvin committed 5 years ago
Commit d9679b33f7
100 changed files with 4,958 additions and 0 deletions
  1. +1 -0  catkin_ws/.catkin_workspace
  2. +1 -0  catkin_ws/src/CMakeLists.txt
  3. +27 -0  catkin_ws/src/snowboy_wakeup/.gitignore
  4. +3 -0  catkin_ws/src/snowboy_wakeup/.gitmodules
  5. +17 -0  catkin_ws/src/snowboy_wakeup/.travis.yml
  6. +30 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/.gitignore
  7. +22 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/.npmignore
  8. +90 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/.travis.yml
  9. +206 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/LICENSE
  10. +12 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/MANIFEST.in
  11. +436 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/README.md
  12. +422 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/README_ZH_CN.md
  13. +134 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/README_commercial.md
  14. +75 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/binding.gyp
  15. +236 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C++/demo.cc
  16. +50 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C++/demo.mk
  17. +146 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C++/demo2.cc
  18. +36 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C++/install_portaudio.sh
  19. +11 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C++/patches/portaudio.patch
  20. +1 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C++/resources
  21. +221 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C/demo.c
  22. +58 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C/demo.mk
  23. +36 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C/install_portaudio.sh
  24. +11 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C/patches/portaudio.patch
  25. +1 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C/resources
  26. +82 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C/snowboy-detect-c-wrapper.cc
  27. +0 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C/snowboy-detect-c-wrapper.h
  28. +0 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/__init__.py
  29. +35 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/demo.py
  30. +41 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/demo2.py
  31. +40 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/demo3.py
  32. +76 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/demo4.py
  33. +35 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/demo_arecord.py
  34. +47 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/demo_threaded.py
  35. +1 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/requirements.txt
  36. +1 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/resources
  37. +248 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/snowboydecoder.py
  38. +181 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/snowboydecoder_arecord.py
  39. +96 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/snowboythreaded.py
  40. +35 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python3/demo.py
  41. +41 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python3/demo2.py
  42. +40 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python3/demo3.py
  43. +75 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python3/demo4.py
  44. +1 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python3/requirements.txt
  45. +1 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python3/resources
  46. +253 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python3/snowboydecoder.py
  47. +52 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/REST_API/training_service.py
  48. +39 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/REST_API/training_service.sh
  49. +220 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/include/snowboy-detect.h
  50. BIN  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/lib/libsnowboy-detect.a
  51. +43 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/package.json
  52. +0 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/resources/common.res
  53. BIN  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/resources/models/jarvis.umdl
  54. BIN  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/resources/models/smart_mirror.umdl
  55. +0 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/resources/models/snowboy.umdl
  56. BIN  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/resources/snowboy.raw
  57. +23 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/scripts/publish-node.sh
  58. +61 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/setup.py
  59. +24 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/swig/Python/snowboy-detect-swig.i
  60. +24 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/swig/Python3/snowboy-detect-swig.i
  61. +34 -0  catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/tsconfig.json
  62. +100 -0  catkin_ws/src/snowboy_wakeup/CMakeLists.txt
  63. +11 -0  catkin_ws/src/snowboy_wakeup/cfg/SnowboyReconfigure.cfg
  64. +419 -0  catkin_ws/src/snowboy_wakeup/cmake_modules/FindBLAS.cmake
  65. +49 -0  catkin_ws/src/snowboy_wakeup/include/hotword_detector.h
  66. +23 -0  catkin_ws/src/snowboy_wakeup/launch/snowboy_wakeup.launch
  67. +16 -0  catkin_ws/src/snowboy_wakeup/package.xml
  68. BIN  catkin_ws/src/snowboy_wakeup/resources/common.res
  69. BIN  catkin_ws/src/snowboy_wakeup/resources/corvin.pmdl
  70. BIN  catkin_ws/src/snowboy_wakeup/resources/ding.wav
  71. BIN  catkin_ws/src/snowboy_wakeup/resources/dong.wav
  72. BIN  catkin_ws/src/snowboy_wakeup/resources/snowboy.umdl
  73. +59 -0  catkin_ws/src/snowboy_wakeup/src/hotword_detector.cpp
  74. +149 -0  catkin_ws/src/snowboy_wakeup/src/hotword_detector_node.cpp
  75. +0 -0  example/duer_os/Makefile
  76. +0 -0  example/duer_os/comm.mk
  77. +0 -0  example/duer_os/include/libduer-device/include/baidu_json.h
  78. +0 -0  example/duer_os/include/libduer-device/include/device_vad.h
  79. +0 -0  example/duer_os/include/libduer-device/include/lightduer_adapter.h
  80. +0 -0  example/duer_os/include/libduer-device/include/lightduer_aes.h
  81. +0 -0  example/duer_os/include/libduer-device/include/lightduer_bind_device.h
  82. +0 -0  example/duer_os/include/libduer-device/include/lightduer_bitmap.h
  83. +0 -0  example/duer_os/include/libduer-device/include/lightduer_ca.h
  84. +0 -0  example/duer_os/include/libduer-device/include/lightduer_ca_conf.h
  85. +0 -0  example/duer_os/include/libduer-device/include/lightduer_coap.h
  86. +0 -0  example/duer_os/include/libduer-device/include/lightduer_coap_defs.h
  87. +0 -0  example/duer_os/include/libduer-device/include/lightduer_coap_ep.h
  88. +0 -0  example/duer_os/include/libduer-device/include/lightduer_coap_trace.h
  89. +0 -0  example/duer_os/include/libduer-device/include/lightduer_connagent.h
  90. +0 -0  example/duer_os/include/libduer-device/include/lightduer_data_cache.h
  91. +0 -0  example/duer_os/include/libduer-device/include/lightduer_dcs.h
  92. +0 -0  example/duer_os/include/libduer-device/include/lightduer_dcs_alert.h
  93. +0 -0  example/duer_os/include/libduer-device/include/lightduer_dcs_local.h
  94. +0 -0  example/duer_os/include/libduer-device/include/lightduer_dcs_router.h
  95. +0 -0  example/duer_os/include/libduer-device/include/lightduer_debug.h
  96. +0 -0  example/duer_os/include/libduer-device/include/lightduer_dev_info.h
  97. +0 -0  example/duer_os/include/libduer-device/include/lightduer_ds_log.h
  98. +0 -0  example/duer_os/include/libduer-device/include/lightduer_ds_log_audio.h
  99. +0 -0  example/duer_os/include/libduer-device/include/lightduer_ds_log_audio_player.h
  100. +0 -0  example/duer_os/include/libduer-device/include/lightduer_ds_log_bind.h

+ 1 - 0
catkin_ws/.catkin_workspace

@@ -0,0 +1 @@
+# This file currently only serves to mark the location of a catkin workspace for tool integration

+ 1 - 0
catkin_ws/src/CMakeLists.txt

@@ -0,0 +1 @@
+/opt/ros/kinetic/share/catkin/cmake/toplevel.cmake

+ 27 - 0
catkin_ws/src/snowboy_wakeup/.gitignore

@@ -0,0 +1,27 @@
+# Compiled Object files
+*.slo
+*.lo
+*.o
+*.obj
+
+# Precompiled Headers
+*.gch
+*.pch
+
+# Compiled Dynamic libraries
+*.dylib
+*.dll
+
+# Fortran module files
+*.mod
+*.smod
+
+# Compiled Static libraries
+*.lai
+*.la
+*.lib
+
+# Executables
+*.exe
+*.out
+*.app

+ 3 - 0
catkin_ws/src/snowboy_wakeup/.gitmodules

@@ -0,0 +1,3 @@
+[submodule "3rdparty/snowboy"]
+	path = 3rdparty/snowboy
+	url = https://github.com/Kitt-AI/snowboy.git

+ 17 - 0
catkin_ws/src/snowboy_wakeup/.travis.yml

@@ -0,0 +1,17 @@
+sudo: true
+
+language: cpp
+
+services:
+  - docker
+
+before_install:
+  - wget https://raw.githubusercontent.com/tue-robotics/tue-env/master/ci/install-package.sh
+  - wget https://raw.githubusercontent.com/tue-robotics/tue-env/master/ci/build-package.sh
+  - export PACKAGE=${TRAVIS_REPO_SLUG#*/}
+
+install:
+  - bash install-package.sh --package=$PACKAGE --branch=$TRAVIS_BRANCH --commit=$TRAVIS_COMMIT --pullrequest=$TRAVIS_PULL_REQUEST
+
+script: 
+  - bash build-package.sh --package=$PACKAGE

+ 30 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/.gitignore

@@ -0,0 +1,30 @@
+snowboy-detect-swig.cc
+snowboydetect.py
+Snowboy.pm
+.DS_Store
+
+*.dylib
+*.pyc
+*.o
+*.so
+*.swp
+*.swo
+
+/examples/C/pa_stable_v19_20140130.tgz
+/examples/C/pa_stable_v190600_20161030.tgz
+/examples/C/portaudio
+/examples/C/demo
+/examples/C++/pa_stable_v19_20140130.tgz
+/examples/C++/pa_stable_v190600_20161030.tgz
+/examples/C++/portaudio
+/examples/C++/demo
+/examples/C++/demo2
+
+/build
+/node_modules
+/lib/node/binding
+/lib/node/index.js
+
+/dist
+**/snowboy.egg-info
+/.idea

+ 22 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/.npmignore

@@ -0,0 +1,22 @@
+/lib/libsnowboy-detect.a
+snowboy-detect-swig.cc
+snowboydetect.py
+.DS_Store
+
+*.pyc
+*.o
+*.so
+
+/examples/C++/*
+/examples/Python/*
+
+/swig/Android/*
+/swig/Python/*
+
+/build
+/node_modules
+
+/lib/node/*.ts
+
+.npmignore
+.travis.yml

+ 90 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/.travis.yml

@@ -0,0 +1,90 @@
+language: cpp
+
+# Cache node dependencies
+cache:
+  directories:
+    - node_modules
+
+# Ubuntu 14.04 Trusty support
+sudo: required
+dist: trusty
+
+addons:
+  apt:
+    sources:
+    # add PPAs with more up-to-date toolchains
+    - ubuntu-toolchain-r-test
+    - llvm-toolchain-precise-3.9
+    packages:
+    # install toolchains
+    - libmagic-dev
+    - libatlas-base-dev
+    - gcc-5
+    - g++-5
+    - clang-3.8
+
+os:
+- linux
+- osx
+
+env:
+  global:
+    - secure: Hpft/SbwPrjQbHq+3DeJ8aMCpg2uW4z9MY4XaPPA5FQ80QkUdFMqALRvdBhXf/hm6bEZVLbIMXxqCImL5C4nx1SMUmsL6w/FbJjnamYEopk2MKCPZHKtZOdxsbdUwpL30WRH85DQ0KbcG9LatEr+qLwf9adRQrozhh5zhoRXzjuH8nxS/GRkYuZgTt4wxNt7xYnCVlARS9/V15OeOGcRWw/Q/r++ipINz8ylGqUnTGImZrDZ2nhlOkBSNzrPA7NhCSw1OiGvZpg4zVj/gDkSkPNFn4oDFr1nNDqg0EPFGVXDDI0KA7dpw2DhrJk1z8HgXw8PorPGP0mLnDl4i811KkCz6g6y+ETC6k1VtdB2jss0MCnD9HtxM0RS62yls6Bm5aMhoFjryOHgLHNrjiHfW2/lki421K6QlGp3a2ONkRk9zHiti3uTdtbxlz0kcu7Z8FT045lHNZX0B6QpPiLi2sy7H/dItqAGdWuY0lrGrddX1PpxCckBAZLO8/VEGGGkLQtzbxEXgF+EW0HJxURvUYUF2VCy+kaq86KrFzvSKS/evW/vj7Sq2rNbOCtnIy/rvIKAXU0bbR/1imuEiiMhKdiZku+jRfZZmpjKHoydba9SsHpuNGnR/sH40AIHv7Lv6q+z3mEI+X1YaOVAAlLYWExuHLLbWYjng2gEBIHwmuU=
+    - secure: RNZDzRXBhS98DMpa0QIKQjL8Nl7Pbo6cYtPyaMjEgF2nv+W+gwhcyDDRUE4psJm26Qkz3AZNfLx/kGKPhhAjBpuGFreCbAFy3uDfbDdcn2K68E+yRSdBAoTIKlxVPpQR11hfPHiAs+3s4BIwLGnuwJSK3JMisboji4ceaxVQpdo0ZcJnNKykN2zabUl+8BW8SYQ8cYp/DLg+wSeqq7eplyYD7zoT/GGnSNylkrRsJxB5zlrRQC/ngUfK7AuxhkfQ14dsdWkkrx0RyVFul5VAc85qAbrtJvLZs2Cu/J3ohNzcRZG7m8+U4diHuIlBFx0ezL3hVBfXkOf74dP8+OnL3rAr/1n+dczl5/5mQqlSsy8UAtUtfdAtd+wRNRy5d+er1YuJBWOGs2SXInjNViEY1Phgs6bY/Lu3wiIxDJH0TORan6ZVSje2/vi7aegRoiqHNrs4m2JuQDCPXu53HKh22+nWgRLLXFT2oBN3FdCz3xj04t+LyT+P5uq9q0jXxKc1nlNpvF3nDzhIuJKcfgBRNm9Wt1vz04xzSRgZEFGMTRWkYTdV+0ZVeqEQjEPo4fRNJ6PT1Tem8VqIoHEKGivGkwiAZ6FhQ/TNkVD7tv5Vhq7eK3ZPXDRakuBsLJ5Nc9QnLCpoEqbuIYqjr8ODKV2HSjS16VaGPbvtYPWzhGKU9C4=
+  matrix:
+    - NODE_VERSION="4.0.0"
+    - NODE_VERSION="5.0.0"
+    - NODE_VERSION="6.0.0"
+    - NODE_VERSION="7.0.0"
+    - NODE_VERSION="8.0.0"
+    - NODE_VERSION="9.0.0"
+
+before_install:
+# use the correct version of node
+- rm -rf ~/.nvm/ && git clone --depth 1 https://github.com/creationix/nvm.git ~/.nvm
+- source ~/.nvm/nvm.sh
+- nvm install $NODE_VERSION
+- nvm use $NODE_VERSION
+# get commit message
+- COMMIT_MESSAGE=$(git show -s --format=%B $TRAVIS_COMMIT | tr -d '\n')
+# put local node-pre-gyp on PATH
+- export PATH=./node_modules/.bin/:$PATH
+# put global node-gyp and nan on PATH
+- npm install node-gyp -g
+# install aws-sdk so it is available for publishing
+- npm install aws-sdk nan typescript @types/node
+# figure out if we should publish or republish
+- PUBLISH_BINARY=false
+- REPUBLISH_BINARY=false
+# if we are building a tag then publish
+# - if [[ $TRAVIS_BRANCH == `git describe --tags --always HEAD` ]]; then PUBLISH_BINARY=true; fi;
+# or if we put [publish binary] in the commit message
+- if test "${COMMIT_MESSAGE#*'[publish binary]'}" != "$COMMIT_MESSAGE"; then PUBLISH_BINARY=true; fi;
+# alternatively we can [republish binary] which will replace any existing binary
+- if test "${COMMIT_MESSAGE#*'[republish binary]'}" != "$COMMIT_MESSAGE"; then PUBLISH_BINARY=true && REPUBLISH_BINARY=true; fi;
+install:
+# ensure source install works
+- npm install --build-from-source
+# test our module
+- node lib/node/index.js
+
+before_script:
+# if publishing, do it
+- if [[ $REPUBLISH_BINARY == true ]]; then node-pre-gyp package unpublish; fi;
+- if [[ $PUBLISH_BINARY == true ]]; then node-pre-gyp package publish; fi;
+# cleanup
+- node-pre-gyp clean
+- node-gyp clean
+
+script:
+# if publishing, test installing from remote
+- INSTALL_RESULT=0
+- if [[ $PUBLISH_BINARY == true ]]; then INSTALL_RESULT=$(npm install --fallback-to-build=false > /dev/null)$? || true; fi;
+# if install returned non zero (errored) then we first unpublish and then call false so travis will bail at this line
+- if [[ $INSTALL_RESULT != 0 ]]; then echo "returned $INSTALL_RESULT";node-pre-gyp unpublish;false; fi
+# If success then we arrive here so lets clean up
+- node-pre-gyp clean
+
+after_success:
+# if success then query and display all published binaries
+- node-pre-gyp info

+ 206 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/LICENSE

@@ -0,0 +1,206 @@
+THIS LICENSE GOVERNS THE SOURCE CODE, THE LIBRARIES, THE RESOURCE FILES, AS WELL
+AS THE HOTWORD MODEL snowboy/resources/snowboy.umdl PROVIDED IN THIS REPOSITORY.
+ALL OTHER HOTWORD MODELS ARE GOVERNED BY THEIR OWN LICENSES.
+
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

+ 12 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/MANIFEST.in

@@ -0,0 +1,12 @@
+recursive-include include *
+recursive-include lib *
+recursive-include swig/Python *
+recursive-include resources *
+include README.md
+
+exclude *.txt
+exclude *.pyc
+global-exclude .DS_Store _snowboydetect.so
+prune resources/alexa
+prune lib/ios
+prune lib/android

+ 436 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/README.md

@@ -0,0 +1,436 @@
+# Snowboy Hotword Detection
+
+by [KITT.AI](http://kitt.ai).
+
+[Home Page](https://snowboy.kitt.ai)
+
+[Full Documentation](http://docs.kitt.ai/snowboy) and [FAQ](http://docs.kitt.ai/snowboy#faq)
+
+[Discussion Group](https://groups.google.com/a/kitt.ai/forum/#!forum/snowboy-discussion) (or send email to snowboy-discussion@kitt.ai)
+
+[Commercial application FAQ](README_commercial.md)
+
+Version: 1.3.0 (2/19/2018)
+
+## Alexa support
+
+Snowboy now brings hands-free experience to the [Alexa AVS sample app](https://github.com/alexa/alexa-avs-sample-app) on Raspberry Pi! See more info below regarding the performance and how you can use other hotword models.
+
+**Performance**
+
+The performance of hotword detection usually depends on the actual environment, e.g., whether it is used with a quality microphone, on the street, in a kitchen, or with background noise, etc. For this reason we feel it is best for users to evaluate it in their real environment. For evaluation purposes, we have prepared an Android app that can be installed and run out of the box: [SnowboyAlexaDemo.apk](https://github.com/Kitt-AI/snowboy/raw/master/resources/alexa/SnowboyAlexaDemo.apk) (please uninstall any previous versions first if you have installed this app before).
+
+**Personal model**
+
+* Create your personal hotword model through our [website](https://snowboy.kitt.ai) or [hotword API](https://snowboy.kitt.ai/api/v1/train/)
+
+* Replace the hotword model in [Alexa AVS sample app](https://github.com/alexa/alexa-avs-sample-app) (after installation) with your personal model
+
+```
+# Please replace YOUR_PERSONAL_MODEL.pmdl with the personal model you just
+# created, and $ALEXA_AVS_SAMPLE_APP_PATH with the actual path where you
+# cloned the Alexa AVS sample app repository.
+cp YOUR_PERSONAL_MODEL.pmdl $ALEXA_AVS_SAMPLE_APP_PATH/samples/wakeWordAgent/ext/resources/alexa.umdl
+```
+
+* Set `APPLY_FRONTEND` to `false` and update `SENSITIVITY` in the [Alexa AVS sample app code](https://github.com/alexa/alexa-avs-sample-app/blob/master/samples/wakeWordAgent/src/KittAiSnowboyWakeWordEngine.cpp) and re-compile
+
+```
+# Please replace $ALEXA_AVS_SAMPLE_APP_PATH with the actual path where you
+# cloned the Alexa AVS sample app repository.
+cd $ALEXA_AVS_SAMPLE_APP_PATH/samples/wakeWordAgent/src/
+
+# Modify KittAiSnowboyWakeWordEngine.cpp and update SENSITIVITY at line 28.
+# Modify KittAiSnowboyWakeWordEngine.cpp and set APPLY_FRONTEND to false at
+# line 30.
+make
+```
+
+* Run the wake word agent with engine set to `kitt_ai`!
+
+**Universal model**
+
+* Replace the hotword model in [Alexa AVS sample app](https://github.com/alexa/alexa-avs-sample-app) (after installation) with your universal model
+
+```
+# Please replace YOUR_UNIVERSAL_MODEL.umdl with the universal model you want
+# to use, and $ALEXA_AVS_SAMPLE_APP_PATH with the actual path where you
+# cloned the Alexa AVS sample app repository.
+cp YOUR_UNIVERSAL_MODEL.umdl $ALEXA_AVS_SAMPLE_APP_PATH/samples/wakeWordAgent/ext/resources/alexa.umdl
+```
+
+* Update `SENSITIVITY` in the [Alexa AVS sample app code](https://github.com/alexa/alexa-avs-sample-app/blob/master/samples/wakeWordAgent/src/KittAiSnowboyWakeWordEngine.cpp) and re-compile
+
+```
+# Please replace $ALEXA_AVS_SAMPLE_APP_PATH with the actual path where you
+# cloned the Alexa AVS sample app repository.
+cd $ALEXA_AVS_SAMPLE_APP_PATH/samples/wakeWordAgent/src/
+
+# Modify KittAiSnowboyWakeWordEngine.cpp and update SENSITIVITY at line 28.
+make
+```
+
+* Run the wake word agent with engine set to `kitt_ai`!
+
+
+## Hotword as a Service
+
+Snowboy now offers **Hotword as a Service** through the ``https://snowboy.kitt.ai/api/v1/train/``
+endpoint. Check out the [Full Documentation](http://docs.kitt.ai/snowboy) and example [Python/Bash script](examples/REST_API) (other language contributions are very welcome).
+
+As a quick start, ``POST`` to https://snowboy.kitt.ai/api/v1/train:
+
+	{
+	    "name": "a word",
+	    "language": "en",
+	    "age_group": "10_19",
+	    "gender": "F",
+	    "microphone": "mic type",
+	    "token": "<your auth token>",
+	    "voice_samples": [
+	        {"wave": "<base64 encoded wave data>"},
+	        {"wave": "<base64 encoded wave data>"},
+	        {"wave": "<base64 encoded wave data>"}
+	    ]
+	}
+
+then you'll get a trained personal model in return!
+
+## Introduction
+
+Snowboy is a customizable hotword detection engine for you to create your own
+hotword like "OK Google" or "Alexa". It is powered by deep neural networks and
+has the following properties:
+
+* **highly customizable**: you can freely define your own magic phrase here –
+let it be “open sesame”, “garage door open”, or “hello dreamhouse”, you name it.
+
+* **always listening** but protects your privacy: Snowboy does not use the Internet
+and does *not* stream your voice to the cloud.
+
+* light-weight and **embedded**: it even runs on a Raspberry Pi and consumes
+less than 10% CPU on the weakest Pi (single-core 700MHz ARMv6).
+
+* Apache licensed!
+
+Currently Snowboy supports (look into the [lib](lib) folder):
+
+* all versions of Raspberry Pi (with Raspbian based on Debian Jessie 8.0)
+* 64bit Mac OS X
+* 64bit Ubuntu 14.04
+* iOS
+* Android
+* ARM64 (aarch64, Ubuntu 16.04)
+
+It ships in the form of a **C++ library** with language-dependent wrappers
+generated by SWIG. We welcome wrappers for new languages -- feel free to send a
+pull request!
+
+Currently we have built wrappers for:
+
+* C/C++
+* Java/Android
+* Go (thanks to @brentnd and @deadprogram)
+* Node (thanks to @evancohen and @nekuz0r)
+* Perl (thanks to @iboguslavsky)
+* Python2/Python3
+* iOS/Swift3 (thanks to @grimlockrocks)
+* iOS/Object-C (thanks to @patrickjquinn)
+
+If you want support on other hardware/OS, please send your request to
+[snowboy@kitt.ai](mailto:snowboy@kitt.ai)
+
+Note: **Snowboy does not support Windows** yet. Please build Snowboy on *nix platforms.
+
+## Pricing for Snowboy models
+
+Hackers: free
+
+* Personal use
+* Community support
+
+Business: please contact us at [snowboy@kitt.ai](mailto:snowboy@kitt.ai)
+
+* Personal use
+* Commercial license
+* Technical support
+
+## Pretrained universal models
+
+We provide pretrained universal models for testing purposes. When you test these
+models, bear in mind that they may not be optimized for your specific device or
+environment.
+
+Here is the list of the models, and the parameters that you have to use for them:
+
+* **resources/alexa/alexa-avs-sample-app/alexa.umdl**: Universal model for the hotword "Alexa" optimized for the [Alexa AVS sample app](https://github.com/alexa/alexa-avs-sample-app). Set SetSensitivity to 0.6, and set ApplyFrontend to true. With ApplyFrontend set to true, this is so far the best "Alexa" model we have released publicly.
+* **resources/models/snowboy.umdl**: Universal model for the hotword "Snowboy". Set SetSensitivity to 0.5 and ApplyFrontend to false.
+* **resources/models/jarvis.umdl**: Universal model for the hotword "Jarvis" (https://snowboy.kitt.ai/hotword/29). It contains two models for the hotword "Jarvis", so you have to set two sensitivities. Set sensitivities to "0.8,0.80" and ApplyFrontend to true.
+* **resources/models/smart_mirror.umdl**: Universal model for the hotword "Smart Mirror" (https://snowboy.kitt.ai/hotword/47). Set Sensitivity to 0.5, and ApplyFrontend to false.
+
+## Precompiled node module
+
+Snowboy is available in the form of a native node module precompiled for:
+64 bit Ubuntu, MacOS X, and the Raspberry Pi (Raspbian 8.0+). For quick
+installation run:
+
+    npm install --save snowboy
+
+For sample usage see the `examples/Node` folder. You may have to install
+dependencies like `fs`, `wav` or `node-record-lpcm16` depending on which script
+you use.
+
+## Precompiled Binaries with Python Demo
+* 64 bit Ubuntu [14.04](https://s3-us-west-2.amazonaws.com/snowboy/snowboy-releases/ubuntu1404-x86_64-1.3.0.tar.bz2)
+* [MacOS X](https://s3-us-west-2.amazonaws.com/snowboy/snowboy-releases/osx-x86_64-1.3.0.tar.bz2)
+* Raspberry Pi with Raspbian 8.0, all versions
+  ([1/2/3/Zero](https://s3-us-west-2.amazonaws.com/snowboy/snowboy-releases/rpi-arm-raspbian-8.0-1.3.0.tar.bz2))
+  
+If you want to compile a version against your own environment/language, read on.
+
+## Dependencies
+
+To run the demo you will likely need the following, depending on which demo you
+use and what platform you are working with:
+
+* SoX (audio conversion)
+* PortAudio or PyAudio (audio capturing)
+* SWIG 3.0.10 or above (compiling Snowboy for different languages/platforms)
+* ATLAS or OpenBLAS (matrix computation)
+
+You can also find the exact commands you need to install the dependencies on
+Mac OS X, Ubuntu or Raspberry Pi below.
+
+### Mac OS X
+
+`brew` install `swig`, `sox`, `portaudio` and its Python binding `pyaudio`:
+
+    brew install swig portaudio sox
+    pip install pyaudio
+
+If you don't have Homebrew installed, please download it [here](http://brew.sh/). If you don't have `pip`, you can install it [here](https://pip.pypa.io/en/stable/installing/).
+
+Make sure that you can record audio with your microphone:
+
+    rec t.wav
+
+### Ubuntu/Raspberry Pi/Pine64/Nvidia Jetson TX1/Nvidia Jetson TX2
+
+First `apt-get` install `swig`, `sox`, `portaudio` and its Python binding `pyaudio`:
+
+    sudo apt-get install swig3.0 python-pyaudio python3-pyaudio sox
+    pip install pyaudio
+    
+Then install the `atlas` matrix computing library:
+
+    sudo apt-get install libatlas-base-dev
+    
+Make sure that you can record audio with your microphone:
+
+    rec t.wav
+        
+If you need extra setup on your audio (especially on a Raspberry Pi), please see the [full documentation](http://docs.kitt.ai/snowboy).
+
+## Compile a Node addon
+Compiling a node addon for Linux and the Raspberry Pi requires the installation of the following dependencies:
+
+    sudo apt-get install libmagic-dev libatlas-base-dev
+
+Then to compile the addon run the following from the root of the snowboy repository:
+
+    npm install
+    ./node_modules/node-pre-gyp/bin/node-pre-gyp clean configure build
+
+## Compile a Java Wrapper
+
+    # Make sure you have JDK installed.
+    cd swig/Java
+    make
+
+SWIG will generate a directory called `java` which contains converted Java wrappers and a directory called `jniLibs` which contains the JNI library.
+
+To run the Java example script:
+
+    cd examples/Java
+    make run
+
+## Compile a Python Wrapper
+
+    cd swig/Python
+    make
+
+SWIG will generate a `_snowboydetect.so` file and a simple (but hard-to-read) python wrapper `snowboydetect.py`. We have provided a higher level python wrapper `snowboydecoder.py` on top of that.
+    
+Feel free to adapt the `Makefile` in `swig/Python` to your own system's setting if you cannot `make` it.
+
+## Compile a GO Wrapper
+
+	cd examples/Go
+	go get github.com/Kitt-AI/snowboy/swig/Go
+	go build -o snowboy main.go
+	./snowboy ../../resources/snowboy.umdl ../../resources/snowboy.wav
+	
+Expected Output:
+
+```
+Snowboy detecting keyword in ../../resources/snowboy.wav
+Snowboy detected keyword  1
+```
+
+For more, please read `examples/Go/readme.md`.
+
+## Compile a Perl Wrapper
+
+    cd swig/Perl
+    make
+
+The Perl examples include training personal hotword using the KITT.AI RESTful APIs, adding Google Speech API after the hotword detection, etc. To run the examples, do the following
+
+    cd examples/Perl
+
+    # Install cpanm, if you don't already have it.
+    curl -L https://cpanmin.us | perl - --sudo App::cpanminus
+
+    # Install the dependencies. Note, on Linux you will have to install the
+    # PortAudio package first, using e.g.:
+    # apt-get install portaudio19-dev
+    sudo cpanm --installdeps .
+
+    # Run the unit test.
+    ./snowboy_unit_test.pl
+
+    # Run the personal model training example.
+    ./snowboy_RESTful_train.pl <API_TOKEN> <Hotword> <Language>
+
+    # Run the Snowboy Google Speech API example. By default it uses the Snowboy
+    # universal hotword.
+    ./snowboy_googlevoice.pl <Google_API_Key> [Hotword_Model]
+
+
+## Compile an iOS Wrapper
+
+Using the Snowboy library in Objective-C does not really require a wrapper. It is basically the same as using a C++ library in Objective-C. We have compiled a "fat" static library for iOS devices, see the library here: `lib/ios/libsnowboy-detect.a`.
+
+To initialize Snowboy detector in Objective-C:
+
+    snowboy::SnowboyDetect* snowboyDetector = new snowboy::SnowboyDetect(
+        std::string([[[NSBundle mainBundle]pathForResource:@"common" ofType:@"res"] UTF8String]),
+        std::string([[[NSBundle mainBundle]pathForResource:@"snowboy" ofType:@"umdl"] UTF8String]));
+    snowboyDetector->SetSensitivity("0.45");        // Sensitivity for each hotword
+    snowboyDetector->SetAudioGain(2.0);             // Audio gain for detection
+
+To run hotword detection in Objective-C:
+
+    int result = snowboyDetector->RunDetection(buffer[0], bufferSize);  // buffer[0] is a float array
+
+You may want to play with the frequency of the calls to `RunDetection()`, which controls the CPU usage and the detection latency.
+
+Thanks to @patrickjquinn and @grimlockrocks, we now have examples of using Snowboy in both Objective-C and Swift3. Check out the examples at `examples/iOS/`, and the screenshots below!
+
+<img src=https://s3-us-west-2.amazonaws.com/kittai-cdn/Snowboy/Obj-C_Demo_02172017.png alt="Obj-C Example" width=300 /> <img src=https://s3-us-west-2.amazonaws.com/kittai-cdn/Snowboy/Swift3_Demo_02172017.png alt="Swift3 Example" width=300 />
+
+
+## Compile an Android Wrapper
+
+The full README and tutorial are in the [Android README](examples/Android/README.md) and here's a screenshot:
+
+<img src="https://s3-us-west-2.amazonaws.com/kittai-cdn/Snowboy/SnowboyAlexaDemo-Andriod.jpeg" alt="Android Alexa Demo" width=300 />
+
+We have prepared an Android app which can be installed and run out of the box: [SnowboyAlexaDemo.apk](https://github.com/Kitt-AI/snowboy/raw/master/resources/alexa/SnowboyAlexaDemo.apk) (please uninstall any previous version first if you installed this app before).
+
+## Quick Start for Python Demo
+
+Go to the `examples/Python` folder and open your python console:
+
+    In [1]: import snowboydecoder
+    
+    In [2]: def detected_callback():
+       ....:     print "hotword detected"
+       ....:
+    
+    In [3]: detector = snowboydecoder.HotwordDetector("resources/snowboy.umdl", sensitivity=0.5, audio_gain=1)
+    
+    In [4]: detector.start(detected_callback)
+    
+Then speak "snowboy" to your microphone to see whetheer Snowboy detects you.
+
+The `snowboy.umdl` file is a "universal" model that detects different people speaking "snowboy". If you want other hotwords, please go to [snowboy.kitt.ai](https://snowboy.kitt.ai) to record, train and download your own personal model (a `.pmdl` file).
+
+The higher the `sensitivity`, the more easily the hotword is triggered, but you might also get more false alarms.
+
+`audio_gain` controls whether to increase (>1) or decrease (<1) input volume.
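To make the gain parameter concrete, here is a minimal sketch of what applying a gain to 16-bit PCM samples amounts to: multiply each sample and clip to the int16 range. This is illustrative only; Snowboy applies its gain internally via the `audio_gain` argument shown above.

```python
# Illustrative sketch: scale int16 PCM samples by a gain factor.
# gain > 1 increases volume, gain < 1 decreases it; results are
# clipped to the valid int16 range [-32768, 32767].
def apply_gain(samples, gain):
    out = []
    for s in samples:
        scaled = int(s * gain)
        out.append(max(-32768, min(32767, scaled)))  # clip to int16 range
    return out
```

For example, `apply_gain([1000, -20000], 2.0)` doubles the first sample to 2000 and clips the second to -32768.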
+
+Two demo files `demo.py` and `demo2.py` are provided to show more usage examples.
+
+Note: if you see the following error:
+
+    TypeError: __init__() got an unexpected keyword argument 'model_str'
+    
+You are probably using an old version of SWIG. Please upgrade. We have tested with SWIG version 3.0.7 and 3.0.8.
+
+## Advanced Usages & Demos
+
+See [Full Documentation](http://docs.kitt.ai/snowboy).
+
+## Change Log
+
+**v1.3.0, 2/19/2018**
+
+* Added Frontend processing for all platforms
+* Added `resources/models/smart_mirror.umdl` for https://snowboy.kitt.ai/hotword/47
+* Added `resources/models/jarvis.umdl` for https://snowboy.kitt.ai/hotword/29
+* Added README for Chinese
+* Cleaned up the supported platforms
+* Re-structured the model path
+
+**v1.2.0, 3/25/2017**
+
+* Added better Alexa model for [Alexa AVS sample app](https://github.com/alexa/alexa-avs-sample-app)
+* New decoder that works well for short hotwords like Alexa
+
+**v1.1.1, 3/24/2017**
+
+* Added Android demo
+* Added iOS demos
+* Added Samsung Artik support
+* Added Go support
+* Added Intel Edison support
+* Added Pine64 support
+* Added Perl Support
+* Added a more robust "Alexa" model (umdl)
+* Offering Hotword as a Service through ``/api/v1/train`` endpoint.
+* Decoder is not changed.
+
+**v1.1.0, 9/20/2016**
+
+* Added library for Node.
+* Added support for Python3.
+* Added universal model `alexa.umdl`
+* Updated universal model `snowboy.umdl` so that it works in noisy environment.
+
+**v1.0.4, 7/13/2016**
+
+* Updated universal `snowboy.umdl` model to make it more robust.
+* Various improvements to speed up the detection.
+* Bug fixes.
+
+**v1.0.3, 6/4/2016**
+
+* Updated universal `snowboy.umdl` model to make it more robust in non-speech environment.
+* Fixed bug when using float as input data.
+* Added library support for Android ARMV7 architecture.
+* Added library for iOS.
+
+**v1.0.2, 5/24/2016**
+
+* Updated universal `snowboy.umdl` model
+* Added C++ examples; docs will come in the next release.
+
+**v1.0.1, 5/16/2016**
+
+* VAD now returns -2 on silence, -1 on error, 0 on voice and >0 on triggered models
+* added static library for Raspberry Pi in case people want to compile themselves instead of using the binary version
+
+**v1.0.0, 5/10/2016**
+
+* initial release

+ 422 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/README_ZH_CN.md

@@ -0,0 +1,422 @@
+# Snowboy 唤醒词检测
+
+[KITT.AI](http://kitt.ai)出品。
+
+[Home Page](https://snowboy.kitt.ai)
+
+[Full Documentation](http://docs.kitt.ai/snowboy) 和 [FAQ](http://docs.kitt.ai/snowboy#faq)
+
+[Discussion Group](https://groups.google.com/a/kitt.ai/forum/#!forum/snowboy-discussion) (或者发送邮件给 snowboy-discussion@kitt.ai)
+
+(因为我们每天都会收到很多消息,从2016年9月开始建立了讨论组。请在这里发送一般性的讨论。关于错误,请使用Github问题标签。)
+
+版本:1.3.0(2/19/2018)
+
+## Alexa支持
+
+Snowboy现在为运行在Raspberry Pi上的[Alexa AVS sample app](https://github.com/alexa/alexa-avs-sample-app)提供了hands-free的体验!有关性能以及如何使用其他唤醒词模型,请参阅下面的信息。
+
+**性能**
+
+唤醒检测的性能通常依赖于实际的环境,例如,它是否与高质量麦克风一起使用,是否在街道上,在厨房中,是否有背景噪音等等. 所以对于性能,我们觉得最好是在使用者真实的环境中进行评估。为了方便评估,我们准备了一个可以直接安装使用的Android应用程序:[SnowboyAlexaDemo.apk](https://github.com/Kitt-AI/snowboy/raw/master/resources/alexa/SnowboyAlexaDemo.apk) (如果您之前安装了此应用程序,请先卸载它) 。
+
+**个人模型**
+
+* 用以下方式创建您的个人模型:[website](https://snowboy.kitt.ai) 或者 [hotword API](https://snowboy.kitt.ai/api/v1/train/)
+* 将[Alexa AVS sample app](https://github.com/alexa/alexa-avs-sample-app)(安装后)的唤醒词模型替换为您的个人模型
+
+```
+# Please replace YOUR_PERSONAL_MODEL.pmdl with the personal model you just
+# created, and $ALEXA_AVS_SAMPLE_APP_PATH with the actual path where you
+# cloned the Alexa AVS sample app repository.
+cp YOUR_PERSONAL_MODEL.pmdl $ALEXA_AVS_SAMPLE_APP_PATH/samples/wakeWordAgent/ext/resources/alexa.umdl
+```
+
+* 在[Alexa AVS sample app code](https://github.com/alexa/alexa-avs-sample-app/blob/master/samples/wakeWordAgent/src/KittAiSnowboyWakeWordEngine.cpp)中设置 `APPLY_FRONTEND` 为 `false`,更新 `SENSITIVITY`,并重新编译
+
+```
+# Please replace $ALEXA_AVS_SAMPLE_APP_PATH with the actual path where you
+# cloned the Alexa AVS sample app repository.
+cd $ALEXA_AVS_SAMPLE_APP_PATH/samples/wakeWordAgent/src/
+
+# Modify KittAiSnowboyWakeWordEngine.cpp and update SENSITIVITY at line 28.
+# Modify KittAiSnowboyWakeWordEngine.cpp and set APPLY_FRONTEND to false at
+# line 30.
+make
+```
+
+* 运行程序,并且把唤醒词引擎设置为`kitt_ai`
+
+
+**通用模型**
+
+* 将[Alexa AVS sample app](https://github.com/alexa/alexa-avs-sample-app)(安装后)的唤醒词模型替换为您的通用模型
+
+```
+# Please replace YOUR_UNIVERSAL_MODEL.umdl with the universal model you just
+# created, and $ALEXA_AVS_SAMPLE_APP_PATH with the actual path where you
+# cloned the Alexa AVS sample app repository.
+cp YOUR_UNIVERSAL_MODEL.umdl $ALEXA_AVS_SAMPLE_APP_PATH/samples/wakeWordAgent/ext/resources/alexa.umdl
+```
+
+* 在[Alexa AVS sample app code](https://github.com/alexa/alexa-avs-sample-app/blob/master/samples/wakeWordAgent/src/KittAiSnowboyWakeWordEngine.cpp) 中更新 `SENSITIVITY`, 并重新编译
+
+```
+# Please replace $ALEXA_AVS_SAMPLE_APP_PATH with the actual path where you
+# cloned the Alexa AVS sample app repository.
+cd $ALEXA_AVS_SAMPLE_APP_PATH/samples/wakeWordAgent/src/
+
+# Modify KittAiSnowboyWakeWordEngine.cpp and update SENSITIVITY at line 28.
+make
+```
+
+* 运行程序,并且把唤醒词引擎设置为`kitt_ai`
+
+
+## 个人唤醒词训练服务
+
+Snowboy现在通过 `https://snowboy.kitt.ai/api/v1/train/` 端口提供 **个人唤醒词训练服务**, 请查看[Full Documentation](http://docs.kitt.ai/snowboy)和示例[Python/Bash script](examples/REST_API)(非常欢迎贡献其他的语言)。
+
+简单来说,`POST` 下面代码到https://snowboy.kitt.ai/api/v1/train:
+
+        {
+            "name": "a word",
+            "language": "en",
+            "age_group": "10_19",
+            "gender": "F",
+            "microphone": "mic type",
+            "token": "<your auth token>",
+            "voice_samples": [
+                {wave: "<base64 encoded wave data>"},
+                {wave: "<base64 encoded wave data>"},
+                {wave: "<base64 encoded wave data>"}
+            ]
+        }
+
+然后您会获得一个训练好的个人模型!
+
+
+## 介绍
+
+Snowboy是一款可定制的唤醒词检测引擎,可为您创建像 "OK Google" 或 "Alexa" 这样的唤醒词。Snowboy基于神经网络,具有以下特性:
+
+* **高度可定制**:您可以自由定义自己的唤醒词 - 
+比如说“open sesame”,“garage door open”或 “hello dreamhouse”等等。
+
+* **总是在监听** 但保护您的个人隐私:Snowboy不使用互联网,不会将您的声音传输到云端。
+
+* **轻量级和嵌入式的**:它可以轻松在Raspberry Pi上运行,甚至在最弱的Pi(单核700MHz ARMv6)上,Snowboy占用的CPU也少于10%。
+
+* Apache授权!
+
+目前Snowboy支持(查看lib文件夹):
+
+* 所有版本的Raspberry Pi(Raspbian基于Debian Jessie 8.0)
+* 64位Mac OS X
+* 64位Ubuntu 14.04
+* iOS
+* Android
+* ARM64(aarch64,Ubuntu 16.04)
+
+Snowboy底层库由C++写成,通过swig被封装成能在多种操作系统和语言上使用的软件库。我们欢迎新语言的封装,请随时发送你们的Pull Request!
+
+目前我们已经实现封装的有:
+
+* C/C++
+* Java / Android
+* Go(thanks to @brentnd and @deadprogram)
+* Node(thanks to @evancohen 和 @nekuz0r)
+* Perl(thanks to @iboguslavsky)
+* Python2/Python3
+* iOS / Swift3(thanks to @grimlockrocks)
+* iOS / Object-C(thanks to @patrickjquinn)
+
+如果您想要支持其他硬件或操作系统,请将您的请求发送至[snowboy@kitt.ai](mailto:snowboy@kitt.ai)
+
+注意:**Snowboy还不支持Windows** 。请在 *nix平台上编译Snowboy。
+
+## Snowboy模型的定价
+
+黑客:免费
+
+* 个人使用
+* 社区支持
+
+商业:请通过[snowboy@kitt.ai](mailto:snowboy@kitt.ai)与我们联系
+
+* 个人使用
+* 商业许可证
+* 技术支持
+
+## 预训练的通用模型
+
+为了测试方便,我们提供一些事先训练好的通用模型。当您测试那些模型时,请记住他们可能没有为您的特定设备或环境进行过优化。
+
+以下是模型列表和您必须使用的参数:
+
+* **resources/alexa/alexa-avs-sample-app/alexa.umdl**:这个是为[Alexa AVS sample app](https://github.com/alexa/alexa-avs-sample-app)优化过的唤醒词为“Alexa”的通用模型,将`SetSensitivity`设置为`0.6`,并将`ApplyFrontend`设置为true。当`ApplyFrontend`设置为`true`时,这是迄今为止我们公开发布的最好的“Alexa”的模型。
+* **resources/models/snowboy.umdl**:唤醒词为“snowboy”的通用模型。将`SetSensitivity`设置为`0.5`,`ApplyFrontend`设置为`false`。
+* **resources/models/jarvis.umdl**: 唤醒词为“Jarvis” (https://snowboy.kitt.ai/hotword/29)的通用模型,其中包含了对应于“Jarvis”的两个唤醒词模型,所以需要设置两个`sensitivity`。将`SetSensitivity`设置为`0.8,0.8`,`ApplyFrontend`设置为`true`。
+* **resources/models/smart_mirror.umdl**: 唤醒词为“Smart Mirror” (https://snowboy.kitt.ai/hotword/47)的通用模型。将`SetSensitivity`设置为`0.5`,`ApplyFrontend`设置为`false`。
+
+## 预编译node模块
+
+Snowboy为以下平台编译了node模块:64位Ubuntu,MacOS X和Raspberry Pi(Raspbian 8.0+)。快速安装运行:
+
+    npm install --save snowboy
+
+有关示例用法,请参阅 `examples/Node` 文件夹。根据您使用的脚本,可能需要安装依赖库,例如 `fs`、`wav` 或 `node-record-lpcm16`。
+
+## 预编译Python Demo的二进制文件
+* 64 bit Ubuntu [12.04](https://s3-us-west-2.amazonaws.com/snowboy/snowboy-releases/ubuntu1204-x86_64-1.2.0.tar.bz2)
+  / [14.04](https://s3-us-west-2.amazonaws.com/snowboy/snowboy-releases/ubuntu1404-x86_64-1.3.0.tar.bz2)
+* [MacOS X](https://s3-us-west-2.amazonaws.com/snowboy/snowboy-releases/osx-x86_64-1.3.0.tar.bz2)
+* Raspberry Pi with Raspbian 8.0, all versions
+  ([1/2/3/Zero](https://s3-us-west-2.amazonaws.com/snowboy/snowboy-releases/rpi-arm-raspbian-8.0-1.3.0.tar.bz2))
+* Pine64 (Debian Jessie 8.5 (3.10.102)), Nvidia Jetson TX1 and Nvidia Jetson TX2 ([download](https://s3-us-west-2.amazonaws.com/snowboy/snowboy-releases/pine64-debian-jessie-1.2.0.tar.bz2))
+* Intel Edison (Ubilinux based on Debian Wheezy 7.8) ([download](https://s3-us-west-2.amazonaws.com/snowboy/snowboy-releases/edison-ubilinux-1.2.0.tar.bz2))
+
+如果您要根据自己的环境/语言编译版本,请继续阅读。
+
+## 依赖
+
+要运行demo,您可能需要以下内容,具体取决于您使用的示例和您正在使用的平台:
+
+* SoX(音频转换)
+* PortAudio或PyAudio(音频录音)
+* SWIG 3.0.10或以上(针对不同语言/平台编译Snowboy)
+* ATLAS或OpenBLAS(矩阵计算)
+
+在下面您还可以找到在Mac OS X,Ubuntu或Raspberry Pi上安装依赖关系所需的确切命令。
+
+### Mac OS X
+
+`brew` 安装 `swig`,`sox`,`portaudio` 和绑定了 `pyaudio`的Python:
+
+    brew install swig portaudio sox
+    pip install pyaudio
+
+如果您没有安装Homebrew,请在这里[here](http://brew.sh/)下载。如果没有pip,可以在这里[here](https://pip.pypa.io/en/stable/installing/)安装。
+
+确保您可以用麦克风录制音频:
+
+    rec t.wav
+
+### Ubuntu / Raspberry Pi / Pine64 / Nvidia Jetson TX1 / Nvidia Jetson TX2
+
+首先 `apt-get` 安装 `swig`,`sox`,`portaudio`和绑定了 `pyaudio` 的 Python:
+
+    sudo apt-get install swig3.0 python-pyaudio python3-pyaudio sox
+    pip install pyaudio
+
+然后安装 `atlas` 矩阵计算库:
+
+    sudo apt-get install libatlas-base-dev
+
+确保您可以用麦克风录制音频:
+
+    rec t.wav
+
+如果您需要额外设置您的音频(特别是Raspberry Pi),请参阅[full documentation](http://docs.kitt.ai/snowboy)。
+
+## 编译Node插件
+
+为Linux和Raspberry Pi编译node插件需要安装以下依赖项:
+
+    sudo apt-get install libmagic-dev libatlas-base-dev
+
+然后编译插件,从snowboy代码库的根目录运行以下内容:
+
+    npm install
+    ./node_modules/node-pre-gyp/bin/node-pre-gyp clean configure build
+
+## 编译Java Wrapper
+
+    # Make sure you have JDK installed.
+    cd swig/Java
+    make
+
+SWIG将生成一个包含转换成Java封装的`java`目录和一个包含JNI库的`jniLibs`目录。
+
+运行Java示例脚本:
+
+    cd examples/Java
+    make run
+
+## 编译Python Wrapper
+
+    cd swig/Python
+    make
+
+SWIG将生成一个 `_snowboydetect.so` 文件和一个简单(但难以阅读)的python封装 `snowboydetect.py`。我们已经提供了一个更容易读懂的python封装 `snowboydecoder.py`。
+
+如果不能make,请适配`swig/Python`中的Makefile到您自己的系统设置。
+
+## 编译GO Warpper
+
+      cd examples/Go
+      go get github.com/Kitt-AI/snowboy/swig/Go
+      go build -o snowboy main.go
+      ./snowboy ../../resources/snowboy.umdl ../../resources/snowboy.wav
+
+期望输出:
+
+    Snowboy detecting keyword in ../../resources/snowboy.wav
+    Snowboy detected keyword  1
+
+
+更多细节,请阅读 `examples/Go/readme.md`。
+
+## 编译Perl wrapper
+
+    cd swig/Perl
+    make
+
+Perl示例包括使用KITT.AI RESTful API训练个人唤醒词,在检测到唤醒之后添加Google Speech API等。要运行示例,请执行以下操作
+
+    cd examples/Perl
+
+    # Install cpanm, if you don't already have it.
+    curl -L https://cpanmin.us | perl - --sudo App::cpanminus
+
+    # Install the dependencies. Note, on Linux you will have to install the
+    # PortAudio package first, using e.g.:
+    # apt-get install portaudio19-dev
+    sudo cpanm --installdeps .
+
+    # Run the unit test.
+    ./snowboy_unit_test.pl
+
+    # Run the personal model training example.
+    ./snowboy_RESTful_train.pl <API_TOKEN> <Hotword> <Language>
+
+    # Run the Snowboy Google Speech API example. By default it uses the Snowboy
+    # universal hotword.
+    ./snowboy_googlevoice.pl <Google_API_Key> [Hotword_Model]
+
+## 编译iOS wrapper
+
+在Objective-C中使用Snowboy库不需要封装. 它与在Objective-C中使用C++库基本相同. 我们为iOS设备编译了一个 "fat" 静态库,请参阅这里的库`lib/ios/libsnowboy-detect.a`。
+
+在Objective-C中初始化Snowboy检测器:
+
+    snowboy::SnowboyDetect* snowboyDetector = new snowboy::SnowboyDetect(
+        std::string([[[NSBundle mainBundle]pathForResource:@"common" ofType:@"res"] UTF8String]),
+        std::string([[[NSBundle mainBundle]pathForResource:@"snowboy" ofType:@"umdl"] UTF8String]));
+    snowboyDetector->SetSensitivity("0.45");        // Sensitivity for each hotword
+    snowboyDetector->SetAudioGain(2.0);             // Audio gain for detection
+
+在Objective-C中运行唤醒词检测:
+
+    int result = snowboyDetector->RunDetection(buffer[0], bufferSize);  // buffer[0] is a float array
+
+您可能需要按照一定的频率调用RunDetection(),从而控制CPU使用率和检测延迟。
+
+感谢@patrickjquinn和@grimlockrocks,我们现在有了在Objective-C和Swift3中使用Snowboy的例子。看看下面的例子`examples/iOS/`和下面的截图!
+
+<img src=https://s3-us-west-2.amazonaws.com/kittai-cdn/Snowboy/Obj-C_Demo_02172017.png alt="Obj-C Example" width=300 /> <img src=https://s3-us-west-2.amazonaws.com/kittai-cdn/Snowboy/Swift3_Demo_02172017.png alt="Swift3 Example" width=300 />
+
+# 编译Android Wrapper
+
+完整的README和教程在[Android README](examples/Android/README.md),这里是一个截图:
+
+<img src="https://s3-us-west-2.amazonaws.com/kittai-cdn/Snowboy/SnowboyAlexaDemo-Andriod.jpeg" alt="Android Alexa Demo" width=300 />
+
+我们准备了一个可以安装并运行的Android应用程序:[SnowboyAlexaDemo.apk](https://github.com/Kitt-AI/snowboy/raw/master/resources/alexa/SnowboyAlexaDemo.apk)(如果您之前安装了此应用程序,请先卸载它们)。
+
+## Python demo快速入门
+
+进入 `examples/Python` 文件夹并打开你的python控制台:
+
+    In [1]: import snowboydecoder
+
+    In [2]: def detected_callback():
+       ....:     print "hotword detected"
+       ....:
+
+    In [3]: detector = snowboydecoder.HotwordDetector("resources/snowboy.umdl", sensitivity=0.5, audio_gain=1)
+
+    In [4]: detector.start(detected_callback)
+
+然后对你的麦克风说"snowboy",看看是否Snowboy检测到你。
+
+这个 `snowboy.umdl` 文件是一个 "通用" 模型,可以检测不同的人说 "snowboy" 。 如果你想要其他的唤醒词,请去[snowboy.kitt.ai](https://snowboy.kitt.ai)录音,训练和下载你自己的个人模型(一个.pmdl文件)。
+
+当 `sensitivity` 设置越高,唤醒越容易触发。但是你也可能会收到更多的误唤醒。
+
+`audio_gain` 控制是否增加(> 1)或降低(<1)输入音量。
+
+我们提供了两个演示文件 `demo.py`, `demo2.py` 以显示更多的用法。
+
+注意:如果您看到以下错误:
+
+    TypeError: __init__() got an unexpected keyword argument 'model_str'
+
+您可能正在使用旧版本的SWIG. 请升级SWIG。我们已经测试过SWIG 3.0.7和3.0.8。
+
+## 高级用法与演示
+
+请参阅[Full Documentation](http://docs.kitt.ai/snowboy)。
+
+## 更改日志
+
+**v1.3.0, 2/19/2018**
+
+* 添加前端处理到所有平台
+* 添加`resources/models/smart_mirror.umdl` 给 https://snowboy.kitt.ai/hotword/47
+* 添加`resources/models/jarvis.umdl` 给 https://snowboy.kitt.ai/hotword/29
+* 添加中文文档
+* 清理支持的平台
+* 重新定义了模型路径
+
+**v1.2.0, 3/25/2017**
+
+* 为[Alexa AVS sample app](https://github.com/alexa/alexa-avs-sample-app)添加更好的Alexa模型
+* 新的解码器,适用于像Alexa这样的简短的词条
+
+**v1.1.1, 3/24/2017**
+
+* 添加Android演示
+* 添加了iOS演示
+* 增加了三星Artik支持
+* 添加Go支持
+* 增加了英特尔爱迪生支持
+* 增加了Pine64的支持
+* 增加了Perl支持
+* 添加了更强大的“Alexa”模型(umdl)
+* 通过 `/api/v1/train` 接口提供唤醒词训练服务。
+* 解码器没有改变
+
+**v1.1.0, 9/20/2016**
+
+* 添加了Node的库
+* 增加了对Python3的支持
+* 增加了通用模型 alexa.umdl
+* 更新通用模型snowboy.umdl,使其在嘈杂的环境中工作
+
+**v1.0.4, 7/13/2016**
+
+* 更新通用snowboy.umdl模型,使其更加健壮
+* 各种改进加快检测
+* Bug修复
+
+**v1.0.3, 6/4/2016**
+
+* 更新的通用snowboy.umdl模型,使其在非语音环境中更加强大
+* 修正使用float作为输入数据时的错误
+* 为Android ARMV7架构增加了库支持
+* 为iOS添加了库
+
+**v1.0.2, 5/24/2016**
+
+* 更新通用snowboy.umdl模型
+* 添加C ++示例,文档将在下一个版本中
+
+**v1.0.1, 5/16/2016**
+
+* VAD现在返回-2为静音,-1为错误,0为语音,大于0为触发了唤醒
+* 添加了Raspberry Pi的静态库,以防人们想自己编译而不是使用二进制版本
+
+**v1.0.0, 5/10/2016**
+
+* 初始版本

+ 134 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/README_commercial.md

@@ -0,0 +1,134 @@
+# Common Questions for a Commercial Application
+
+You are looking for a way to put Snowboy in a commercial application. We have compiled a large collection of common questions from our customers all over the world in various industries. 
+
+
+## Universal models (paid) vs. personal models (free)
+
+Personal models:
+
+* are the models you downloaded from https://snowboy.kitt.ai or using our `/train` SaaS API.
+* are good for quick demos
+* are built with only 3 voice samples
+* are not noise robust and you'll get a lot of false alarms in real environment
+* only work on your own voice or a very similar voice, thus is speaker dependent
+* are free
+
+Universal models:
+
+* are built using a lot more voice samples (at least thousands)
+* take effort to collect those voice samples
+* take a lot of GPU time to train
+* are more robust against noise
+* are mostly speaker independent (with challenges on children's voice and accents)
+* cannot be built by yourself using the web interface or the SaaS API
+* cost you money
+
+### FAQ for universal & personal models
+
+Q: **If I record multiple times on snowboy.kitt.ai, can I improve the personal models?**  
+A: No. Personal models only take 3 voice samples to build. Each time you record new voices, the previous samples are overwritten and not used in your current model. 
+
+
+Q: **How can I get a universal model for free?**  
+A: The *one and only* way: Ask 500 people to log in to snowboy.kitt.ai, contribute their voice samples to a particular hotword, then ask us to build a universal model for that hotword.
+
+Q: **Can I use your API to collect voices from 500 people and increment the sample counter from snowboy.kitt.ai?**  
+A: No. The [SaaS](https://github.com/kitt-ai/snowboy#hotword-as-a-service) API is separate from the website.
+
+Q: **How long does it take to get a universal model?**  
+A: Usually a month.
+
+## Licensing
+
+
+### Explain your license again?
+
+Everything on Snowboy's GitHub repo is Apache licensed, including various sample applications and wrapper codes, though the Snowboy library is binary code compiled against different platforms. 
+
+With that said, if you built an application from https://github.com/kitt-ai/snowboy or personal models downloaded from https://snowboy.kitt.ai, you don't need to pay a penny.
+
+If you want to use a universal model with your own customized hotword, you'll need an **evaluation license** and a **commercial license**.
+
+### Evaluation license
+
+Each hotword is different. When you train a universal model with your own hotword, nobody can guarantee that it works on your system without any flaws. Thus you'll need to get an evaluation license first to test whether your universal model works for you.
+
+An evaluation license:
+
+* gives you a 90 day window to evaluate the universal model we build for you
+* costs you money
+
+**Warning: an evaluation license will expire after 90 days. Make sure you don't use the model with evaluation license in production systems.** Get a commercial license from us for your production system.
+
+#### Evaluation license FAQ
+
+Q: **How much does it cost?**  
+A: A few thousand dollars.
+
+Q: **Can I get a discount as a {startup, student, NGO}?**  
+A: No. Our pricing is already at least half of what others charge.
+
+Q: **How can you make sure your universal model works for me?**  
+A: We simply can't. However, we have a few sample universal models in our GitHub [repo](https://github.com/Kitt-AI/snowboy/tree/master/resources), including "alexa.umdl", "snowboy.umdl", and "smart_mirror.umdl". The "alexa.umdl" model is enhanced with a lot more data and is not a typical case, so pay attention to testing "snowboy.umdl" and "smart_mirror.umdl": they offer performance similar to what your model will have.
+
+
+### Commercial license
+
+After evaluation, if you want to go with Snowboy, you'll need a commercial license to deploy it. We usually charge a flat fee per unit of hardware you sell.
+
+#### Commercial license FAQ
+
+Q: **Is it a one-time license or subscription-based license?**  
+A: It's a perpetual license for each device. Since the Snowboy library runs *offline* on your device, you can run it forever without worrying about any broken and dependent web services.
+
+Q: **What's your pricing structure?**  
+A: We have tiered pricing depending on your volume. We charge less if you sell more.
+
+Q: **Can you give me one example?**  
+A: For instance, if your product is a talking robot with a $300 price tag, and you sell at least 100,000 units per year, we'll probably charge you $1 per unit once you go over 100,000 units. If your product is a smart speaker with a $30 price tag, we won't charge you $1, but you'll have to sell a lot more to make the business sense to us.
+
+Q: **I plan to sell 1000 units a year, can I license your software for $1 per unit?**  
+A: No. In that way we only make $1000 a year, which is not worth the amount of time we put on your hotword.
+
+Q: **I make a cellphone app, not a hardware product, what's the pricing structure?**  
+A: Depends on how you generate revenue. For instance, if your app is priced at $1.99, we'll collect cents per paid user, assuming you have a large user base. If you only have 2000 paid users, we'll make a revenue of less than a hundred dollars and it won't make sense to us.
+
+
+### What's the process of getting a license?
+
+1. Make sure Snowboy can run on your system
+2. Reach out to us with your hotword name, commercial application, and target market
+3. Discuss with us about **commercial license** fee to make sure our pricing fits your budget
+4. Sign an evaluation contract, pay 50% of invoice
+5. We'll train a universal model for you and give you an **evaluation license** of 90 days
+6. Test the model and discuss how we can improve it
+7. If you decide to go with it, get a commercial license from us
+
+## General Questions
+
+### What language does Snowboy support?
+
+We support North American English and Chinese the best. We can also handle Indian accents to some extent. For other languages, we'll need to first listen to your hotword (please send us a few .wav voice samples) before we can engage.
+
+### How many voice samples do you need?
+
+Usually 1500 voice samples from 500 people to get started. The more the better. If your hotword is in English, we can collect the voice samples for you. Otherwise you'll need to collect it yourself and send to us.
+
+### What's the format on voice samples?
+
+16000Hz sample rate, 16 bit integer, mono channel, .wav files.
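As a convenience, the format above can be checked with Python's standard-library `wave` module. This is a small sketch (the function name `is_valid_sample` is ours, not part of any snowboy tooling):

```python
import wave

# Check whether a .wav file matches the voice-sample format described
# above: 16000 Hz sample rate, 16-bit samples, mono channel.
def is_valid_sample(path):
    with wave.open(path, "rb") as w:
        return (w.getframerate() == 16000 and
                w.getsampwidth() == 2 and      # 16-bit = 2 bytes per sample
                w.getnchannels() == 1)
```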
+
+### Does Snowboy do: AEC, VAD, Noise Suppression, Beam Forming?
+
+Snowboy has weak support for VAD and noise suppression, as we found that some customers use Snowboy without a microphone array. Snowboy is not an audio frontend processing toolkit, so it does not support AEC or beam forming.
+
+If your application needs to support far-field speech, i.e., verbal communication from at least 3 feet away, you'll need a microphone array to enhance incoming speech and reduce noise. Please do not rely on Snowboy to do everything.
+
+### Can you compile Snowboy for my platform?
+
+If your platform is not listed [here](https://github.com/Kitt-AI/snowboy/tree/master/lib), and you want to get a commercial license from us, please contact us with your toolchain, hardware chip, RAM, OS, GCC/G++ version. Depending on the effort, we might charge an NRE fee for cross compiling.
+
+### Contact
+
+If this document doesn't cover what's needed, feel free to reach out to us at snowboy@kitt.ai

+ 75 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/binding.gyp

@@ -0,0 +1,75 @@
+{
+    'targets': [{
+        'target_name': 'snowboy',
+        'sources': [
+            'swig/Node/snowboy.cc'
+        ],
+        'conditions': [
+            ['OS=="mac"', {
+                'link_settings': {
+                    'libraries': [
+                        '<(module_root_dir)/lib/osx/libsnowboy-detect.a',
+                    ]
+                }
+            }],
+            ['OS=="linux" and target_arch=="x64"', {
+                'link_settings': {
+                    'ldflags': [
+                        '-Wl,--no-as-needed',
+                    ],
+                    'libraries': [
+                        '<(module_root_dir)/lib/ubuntu64/libsnowboy-detect.a',
+                    ]
+                }
+            }],
+            ['OS=="linux" and target_arch=="arm"', {
+                'link_settings': {
+                    'ldflags': [
+                        '-Wl,--no-as-needed',
+                    ],
+                    'libraries': [
+                        '<(module_root_dir)/lib/rpi/libsnowboy-detect.a',
+                    ]
+                }
+            }]
+        ],
+        'cflags': [
+            '-std=c++11',
+            '-fexceptions',
+            '-Wall',
+            '-D_GLIBCXX_USE_CXX11_ABI=0'
+        ],
+        'cflags!': [
+            '-fno-exceptions'
+        ],
+        'cflags_cc!': [
+            '-fno-exceptions'
+        ],
+        'include_dirs': [
+            "<!(node -e \"require('nan')\")",
+            "<!(pwd)/include"
+        ],
+        'libraries': [
+            '-lcblas'
+        ],
+        'xcode_settings': {
+            'MACOSX_DEPLOYMENT_TARGET': '10.11',
+            "GCC_ENABLE_CPP_EXCEPTIONS": "YES",
+            'OTHER_CFLAGS': [
+                '-std=c++11',
+                '-stdlib=libc++'
+            ]
+        }
+    },
+    {
+      "target_name": "action_after_build",
+      "type": "none",
+      "dependencies": [ "<(module_name)" ],
+      "copies": [
+        {
+          "files": [ "<(PRODUCT_DIR)/<(module_name).node" ],
+          "destination": "<(module_path)"
+        }
+      ]
+    }]
+}

+ 236 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C++/demo.cc

@@ -0,0 +1,236 @@
+// example/C++/demo.cc
+
+// Copyright 2016  KITT.AI (author: Guoguo Chen)
+
+#include <cassert>
+#include <csignal>
+#include <iostream>
+#include <pa_ringbuffer.h>
+#include <pa_util.h>
+#include <portaudio.h>
+#include <string>
+#include <vector>
+
+#include "include/snowboy-detect.h"
+
+int PortAudioCallback(const void* input,
+                      void* output,
+                      unsigned long frame_count,
+                      const PaStreamCallbackTimeInfo* time_info,
+                      PaStreamCallbackFlags status_flags,
+                      void* user_data);
+
+class PortAudioWrapper {
+ public:
+  // Constructor.
+  PortAudioWrapper(int sample_rate, int num_channels, int bits_per_sample) {
+    num_lost_samples_ = 0;
+    min_read_samples_ = sample_rate * 0.1;
+    Init(sample_rate, num_channels, bits_per_sample);
+  }
+
+  // Reads data from ring buffer.
+  template<typename T>
+  void Read(std::vector<T>* data) {
+    assert(data != NULL);
+
+    // Checks ring buffer overflow.
+    if (num_lost_samples_ > 0) {
+      std::cerr << "Lost " << num_lost_samples_ << " samples due to ring"
+          << " buffer overflow." << std::endl;
+      num_lost_samples_ = 0;
+    }
+
+    ring_buffer_size_t num_available_samples = 0;
+    while (true) {
+      num_available_samples =
+          PaUtil_GetRingBufferReadAvailable(&pa_ringbuffer_);
+      if (num_available_samples >= min_read_samples_) {
+        break;
+      }
+      Pa_Sleep(5);
+    }
+
+    // Reads data.
+    num_available_samples = PaUtil_GetRingBufferReadAvailable(&pa_ringbuffer_);
+    data->resize(num_available_samples);
+    ring_buffer_size_t num_read_samples = PaUtil_ReadRingBuffer(
+        &pa_ringbuffer_, data->data(), num_available_samples);
+    if (num_read_samples != num_available_samples) {
+      std::cerr << num_available_samples << " samples were available, but "
+          << "only " << num_read_samples << " samples were read." << std::endl;
+    }
+  }
+
+  int Callback(const void* input, void* output,
+               unsigned long frame_count,
+               const PaStreamCallbackTimeInfo* time_info,
+               PaStreamCallbackFlags status_flags) {
+    // Input audio.
+    ring_buffer_size_t num_written_samples =
+        PaUtil_WriteRingBuffer(&pa_ringbuffer_, input, frame_count);
+    num_lost_samples_ += frame_count - num_written_samples;
+    return paContinue;
+  }
+
+  ~PortAudioWrapper() {
+    Pa_StopStream(pa_stream_);
+    Pa_CloseStream(pa_stream_);
+    Pa_Terminate();
+    PaUtil_FreeMemory(ringbuffer_);
+  }
+
+ private:
+  // Initialization.
+  bool Init(int sample_rate, int num_channels, int bits_per_sample) {
+    // Allocates ring buffer memory.
+    int ringbuffer_size = 16384;
+    ringbuffer_ = static_cast<char*>(
+        PaUtil_AllocateMemory(bits_per_sample / 8 * ringbuffer_size));
+    if (ringbuffer_ == NULL) {
+      std::cerr << "Fail to allocate memory for ring buffer." << std::endl;
+      return false;
+    }
+
+    // Initializes PortAudio ring buffer.
+    ring_buffer_size_t rb_init_ans =
+        PaUtil_InitializeRingBuffer(&pa_ringbuffer_, bits_per_sample / 8,
+                                    ringbuffer_size, ringbuffer_);
+    if (rb_init_ans == -1) {
+      std::cerr << "Ring buffer size is not power of 2." << std::endl;
+      return false;
+    }
+
+    // Initializes PortAudio.
+    PaError pa_init_ans = Pa_Initialize();
+    if (pa_init_ans != paNoError) {
+      std::cerr << "Fail to initialize PortAudio, error message is \""
+          << Pa_GetErrorText(pa_init_ans) << "\"" << std::endl;
+      return false;
+    }
+
+    PaError pa_open_ans;
+    if (bits_per_sample == 8) {
+      pa_open_ans = Pa_OpenDefaultStream(
+          &pa_stream_, num_channels, 0, paUInt8, sample_rate,
+          paFramesPerBufferUnspecified, PortAudioCallback, this);
+    } else if (bits_per_sample == 16) {
+      pa_open_ans = Pa_OpenDefaultStream(
+          &pa_stream_, num_channels, 0, paInt16, sample_rate,
+          paFramesPerBufferUnspecified, PortAudioCallback, this);
+    } else if (bits_per_sample == 32) {
+      pa_open_ans = Pa_OpenDefaultStream(
+          &pa_stream_, num_channels, 0, paInt32, sample_rate,
+          paFramesPerBufferUnspecified, PortAudioCallback, this);
+    } else {
+      std::cerr << "Unsupported BitsPerSample: " << bits_per_sample
+          << std::endl;
+      return false;
+    }
+    if (pa_open_ans != paNoError) {
+      std::cerr << "Fail to open PortAudio stream, error message is \""
+          << Pa_GetErrorText(pa_open_ans) << "\"" << std::endl;
+      return false;
+    }
+
+    PaError pa_stream_start_ans = Pa_StartStream(pa_stream_);
+    if (pa_stream_start_ans != paNoError) {
+      std::cerr << "Fail to start PortAudio stream, error message is \""
+          << Pa_GetErrorText(pa_stream_start_ans) << "\"" << std::endl;
+      return false;
+    }
+    return true;
+  }
+
+ private:
+  // Pointer to the ring buffer memory.
+  char* ringbuffer_;
+
+  // Ring buffer wrapper used in PortAudio.
+  PaUtilRingBuffer pa_ringbuffer_;
+
+  // Pointer to PortAudio stream.
+  PaStream* pa_stream_;
+
+  // Number of lost samples at each Read() due to ring buffer overflow.
+  int num_lost_samples_;
+
+  // Wait for this number of samples in each Read() call.
+  int min_read_samples_;
+};
+
+int PortAudioCallback(const void* input,
+                      void* output,
+                      unsigned long frame_count,
+                      const PaStreamCallbackTimeInfo* time_info,
+                      PaStreamCallbackFlags status_flags,
+                      void* user_data) {
+  PortAudioWrapper* pa_wrapper = reinterpret_cast<PortAudioWrapper*>(user_data);
+  pa_wrapper->Callback(input, output, frame_count, time_info, status_flags);
+  return paContinue;
+}
+
+void SignalHandler(int signal){
+  std::cerr << "Caught signal " << signal << ", terminating..." << std::endl;
+  exit(0);
+}
+
+int main(int argc, char* argv[]) {
+  std::string usage =
+      "Example that shows how to use Snowboy in C++. Parameters are\n"
+      "hard-coded in the parameter section. Please check the source code for\n"
+      "more details. Audio is captured by PortAudio.\n"
+      "\n"
+      "To run the example:\n"
+      "  ./demo\n";
+
+  // Checks the command.
+  if (argc > 1) {
+    std::cerr << usage;
+    exit(1);
+  }
+
+  // Configures signal handling.
+  struct sigaction sig_int_handler;
+  sig_int_handler.sa_handler = SignalHandler;
+  sigemptyset(&sig_int_handler.sa_mask);
+  sig_int_handler.sa_flags = 0;
+  sigaction(SIGINT, &sig_int_handler, NULL);
+
+  // Parameter section.
+  // If you have multiple hotword models (e.g., 2), you should set
+  // <model_filename> and <sensitivity_str> as follows:
+  //   model_filename =
+  //     "resources/models/snowboy.umdl,resources/models/smart_mirror.umdl";
+  //   sensitivity_str = "0.5,0.5";
+  std::string resource_filename = "resources/common.res";
+  std::string model_filename = "resources/models/snowboy.umdl";
+  std::string sensitivity_str = "0.5";
+  float audio_gain = 1;
+
+  // Initializes Snowboy detector.
+  snowboy::SnowboyDetect detector(resource_filename, model_filename);
+  detector.SetSensitivity(sensitivity_str);
+  detector.SetAudioGain(audio_gain);
+
+  // Initializes PortAudio. You may use other tools to capture the audio.
+  PortAudioWrapper pa_wrapper(detector.SampleRate(),
+                              detector.NumChannels(), detector.BitsPerSample());
+
+  // Runs the detection.
+  // Note: I hard-coded <int16_t> as data type because detector.BitsPerSample()
+  //       returns 16.
+  std::cout << "Listening... Press Ctrl+C to exit" << std::endl;
+  std::vector<int16_t> data;
+  while (true) {
+    pa_wrapper.Read(&data);
+    if (data.size() != 0) {
+      int result = detector.RunDetection(data.data(), data.size());
+      if (result > 0) {
+        std::cout << "Hotword " << result << " detected!" << std::endl;
+      }
+    }
+  }
+
+  return 0;
+}
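The demo above drains PortAudio's lock-free ring buffer in `Read()` and counts any samples dropped on overflow in the callback. The gist of that bookkeeping can be sketched in a few lines of Python — a simplified, single-threaded stand-in for `PaUtilRingBuffer`, not the actual PortAudio implementation:

```python
class RingBuffer:
    """Single-producer/single-consumer ring buffer mirroring the pattern in
    demo.cc: writes that exceed the free space are dropped and counted as
    lost samples. Unlike PortAudio's version, this sketch is not thread-safe."""

    def __init__(self, size):
        assert size & (size - 1) == 0, "size must be a power of 2"
        self.buf = [None] * size
        self.size = size
        self.write_idx = 0  # monotonically increasing; index via modulo
        self.read_idx = 0
        self.lost = 0       # counterpart of num_lost_samples_

    def write(self, samples):
        """Write as many samples as fit; return how many were accepted."""
        free = self.size - (self.write_idx - self.read_idx)
        accepted = samples[:free]
        for s in accepted:
            self.buf[self.write_idx % self.size] = s
            self.write_idx += 1
        self.lost += len(samples) - len(accepted)
        return len(accepted)

    def read_available(self):
        """Counterpart of PaUtil_GetRingBufferReadAvailable."""
        return self.write_idx - self.read_idx

    def read(self, n):
        """Read up to n samples, oldest first."""
        n = min(n, self.read_available())
        out = [self.buf[(self.read_idx + i) % self.size] for i in range(n)]
        self.read_idx += n
        return out

# The audio callback would call write(); the detection loop calls read().
rb = RingBuffer(8)
written = rb.write(list(range(10)))  # only 8 slots free: 2 samples are lost
chunk = rb.read(4)                   # oldest 4 samples
```

This is why demo.cc warns about "ring buffer overflow": if the detection loop reads too slowly, the callback's writes are partially dropped and the lost count grows.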

+ 50 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C++/demo.mk

@@ -0,0 +1,50 @@
+TOPDIR := ../../
+DYNAMIC := True
+CC = $(CXX)
+CXX :=
+LDFLAGS :=
+LDLIBS :=
+PORTAUDIOINC := portaudio/install/include
+PORTAUDIOLIBS := portaudio/install/lib/libportaudio.a
+
+CXXFLAGS += -D_GLIBCXX_USE_CXX11_ABI=0
+
+ifeq ($(DYNAMIC), True)
+  CXXFLAGS += -fPIC
+endif
+
+ifeq ($(shell uname -m | cut -c 1-3), x86)
+  CXXFLAGS += -msse  -msse2
+endif
+
+ifeq ($(shell uname), Darwin)
+  # By default Mac uses clang++ as g++, but people may have changed their
+  # default configuration.
+  CXX := clang++
+  CXXFLAGS += -I$(TOPDIR) -Wall -Wno-sign-compare -Winit-self \
+      -DHAVE_POSIX_MEMALIGN -DHAVE_CLAPACK -I$(PORTAUDIOINC)
+  LDLIBS += -ldl -lm -framework Accelerate -framework CoreAudio \
+      -framework AudioToolbox -framework AudioUnit -framework CoreServices \
+      $(PORTAUDIOLIBS)
+  SNOWBOYDETECTLIBFILE := $(TOPDIR)/lib/osx/libsnowboy-detect.a
+else ifeq ($(shell uname), Linux)
+  CXX := g++
+  CXXFLAGS += -I$(TOPDIR) -std=c++0x -Wall -Wno-sign-compare \
+      -Wno-unused-local-typedefs -Winit-self -rdynamic \
+      -DHAVE_POSIX_MEMALIGN -I$(PORTAUDIOINC)
+  LDLIBS += -ldl -lm -Wl,-Bstatic -Wl,-Bdynamic -lrt -lpthread $(PORTAUDIOLIBS)\
+      -L/usr/lib/atlas-base -lf77blas -lcblas -llapack_atlas -latlas -lasound
+  SNOWBOYDETECTLIBFILE := $(TOPDIR)/lib/ubuntu64/libsnowboy-detect.a
+  ifneq (,$(findstring arm,$(shell uname -m)))
+    SNOWBOYDETECTLIBFILE := $(TOPDIR)/lib/rpi/libsnowboy-detect.a
+  endif
+endif
+
+# Suppress clang warnings...
+COMPILER = $(shell $(CXX) -v 2>&1 )
+ifeq ($(findstring clang,$(COMPILER)), clang)
+  CXXFLAGS += -Wno-mismatched-tags -Wno-c++11-extensions
+endif
+
+# Set optimization level.
+CXXFLAGS += -O3

+ 146 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C++/demo2.cc

@@ -0,0 +1,146 @@
+#include <cstdio>
+#include <cstdlib>
+#include <iostream>
+
+#include "include/snowboy-detect.h"
+#include "portaudio.h"
+
+#define resource_filename "resources/common.res"
+#define model_filename "resources/models/snowboy.umdl"
+#define sensitivity_str "0.5"
+
+struct wavHeader { //44 byte HEADER only
+  char  RIFF[4];
+  int   RIFFsize;
+  char  fmt[8];
+  int   fmtSize;
+  short fmtTag;
+  short nchan;
+  int   fs;
+  int   avgBps;
+  short nBlockAlign;
+  short bps;
+  char  data[4];
+  int   datasize;
+};
+
+
+void readWavHeader(wavHeader *wavhdr, FILE *fi) {
+  //=====================================================
+  // Reads the WAV file header, with the following restrictions:
+  // - the format tag must be 1 (PCM, no encoding)
+  // - the <data> chunk must come immediately before the data bytes
+  //   (there must be no chunks after 'data')
+  // Leaves the file position at the beginning of the audio data.
+
+  char *tag = (char *)wavhdr;
+  fread(wavhdr, 34, 1, fi); //starting tag should be "RIFF"
+  if (tag[0] != 'R' || tag[1] != 'I' || tag[2] != 'F' || tag[3] != 'F') {
+    fclose(fi);
+    perror("NO 'RIFF'.");
+    exit(1);
+  }
+  if (wavhdr->fmtTag != 1) {
+    fclose(fi);
+    perror("WAV file has encoded data or it is WAVEFORMATEXTENSIBLE.");
+    exit(1);
+  }
+  if (wavhdr->fmtSize == 14) {
+    wavhdr->bps = 16;
+  }
+  if (wavhdr->fmtSize >= 16) {
+    fread(&wavhdr->bps, 2, 1, fi);
+  }
+  if (wavhdr->fmtSize == 18) {
+    short lixo;
+    fread(&lixo, 2, 1, fi);
+  }
+  tag += 36; // points to wavhdr->data
+  fread(tag, 4, 1, fi); // the data chunk should be here.
+  while (tag[0] != 'd' || tag[1] != 'a' || tag[2] != 't' || tag[3] != 'a') {
+    fread(tag, 4, 1, fi);
+    if (ftell(fi) >= long(wavhdr->RIFFsize)) {
+      fclose(fi);
+      perror("Bad WAV header!");
+      exit(1);
+    }
+  }
+  fread(&wavhdr->datasize, 4, 1, fi); //data size
+  // Assuming that header ends here.
+  // From here until the end it is audio data
+}
+
+
+
+int main(int argc, char * argv[]) {
+  std::string usage =
+      "C++ demo that shows how to use Snowboy. In this example the user can\n"
+      "read the audio data from a file.\n"
+      "\n"
+      "Attention when reading from a file: this software is for\n"
+      "simulation/test only. You need to take precautions when loading a\n"
+      "file into memory.\n"
+      "\n"
+      "To run the example:\n"
+      "  ./demo2 [filename.raw || filename.wav ]\n"
+      "\n"
+      "IMPORTANT NOTE: Raw file must be 16kHz sample rate, mono, 16-bit\n";
+
+  // default
+  char * filename;
+  int fsize;
+  short * data_buffer = NULL;
+  bool isRaw = true;
+  FILE *f = NULL;
+
+  if (argc != 2) {
+    std::cout << usage << std::endl;
+    exit(1);
+  } else {
+    filename = argv[1];
+  }
+
+  std::string str = filename;
+  std::string type = ".wav";
+
+  if (str.find(type) != std::string::npos) {
+    isRaw = false;
+  }
+
+
+  if (filename != NULL) {
+    f = fopen(filename,"rb");
+  }
+
+  if (f == NULL) {
+    perror ("Error opening file");
+    return(-1);
+  }
+
+  if (!isRaw) {
+    wavHeader *wavhdr = new wavHeader();
+    readWavHeader(wavhdr, f);
+
+    data_buffer = (short *)malloc(wavhdr->datasize);
+    // Consume all the audio to the buffer
+    fread(data_buffer, wavhdr->datasize, 1, f);
+    fclose(f);
+    fsize = wavhdr->datasize;
+  } else {
+    fseek(f,0,SEEK_END);
+    fsize = ftell(f);
+    rewind(f);
+
+    // Consume all the audio to the buffer
+    data_buffer = (short *)malloc(fsize);
+    int aa = fread(&data_buffer[0], 1 ,fsize, f);
+    std::cout << "Read bytes: " << aa << std::endl;
+    fclose(f);
+
+  }
+
+  // Initializes Snowboy detector.
+  snowboy::SnowboyDetect detector(resource_filename, model_filename);
+  detector.SetSensitivity(sensitivity_str);
+
+  int result = detector.RunDetection(&data_buffer[0], fsize/sizeof(short));
+  std::cout << ">>>>> Result: " << result << " <<<<<" << std::endl;
+  std::cout << "Legend: -2: noise | -1: error | 0: silence | 1: hotword"
+      << std::endl;
+
+  return 0;
+}
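demo2.cc parses the canonical 44-byte PCM WAV header by reading it into the packed `wavHeader` struct. The same layout can be sketched with Python's standard `struct` module — an illustrative stand-alone snippet, assuming a strictly canonical header (16-byte fmt chunk, `data` chunk immediately following):

```python
import struct

# Little-endian layout matching demo2.cc's wavHeader:
# RIFF[4] RIFFsize fmt[8] fmtSize fmtTag nchan fs avgBps nBlockAlign bps data[4] datasize
WAV_HEADER_FMT = "<4sI8sIHHIIHH4sI"  # struct.calcsize(...) == 44

def parse_wav_header(data):
    """Parse a canonical 44-byte PCM WAV header and return its key fields."""
    (riff, riff_size, wavefmt, fmt_size, fmt_tag, nchan,
     fs, avg_bps, block_align, bps, data_tag,
     data_size) = struct.unpack(WAV_HEADER_FMT, data[:44])
    assert riff == b"RIFF" and data_tag == b"data", "bad WAV header"
    assert fmt_tag == 1, "only uncompressed PCM is supported"
    return {"channels": nchan, "sample_rate": fs,
            "bits": bps, "data_size": data_size}

# Demonstration: build a header for 320 bytes of 16 kHz mono 16-bit audio.
header = struct.pack(WAV_HEADER_FMT, b"RIFF", 36 + 320, b"WAVEfmt ", 16,
                     1, 1, 16000, 32000, 2, 16, b"data", 320)
info = parse_wav_header(header)
```

Real-world files may carry extra chunks (e.g. `LIST`) between `fmt ` and `data`, which is why demo2.cc scans forward for the `data` tag instead of assuming a fixed offset.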

+ 36 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C++/install_portaudio.sh

@@ -0,0 +1,36 @@
+#!/bin/bash
+
+# This script attempts to install PortAudio, which can grab a live audio stream
+# from the soundcard.
+#
+# On linux systems, we only build with ALSA, so make sure you install it using
+# e.g.:
+#   sudo apt-get -y install libasound2-dev
+
+echo "Installing portaudio"
+
+if [ ! -e pa_stable_v190600_20161030.tgz ]; then
+  wget -T 10 -t 3 \
+    http://www.portaudio.com/archives/pa_stable_v190600_20161030.tgz || exit 1;
+fi
+
+tar -xovzf pa_stable_v190600_20161030.tgz || exit 1
+
+cd portaudio
+patch < ../patches/portaudio.patch
+
+MACOS=`uname 2>/dev/null | grep Darwin`
+if [ -z "$MACOS" ]; then
+  ./configure --without-jack --without-oss \
+    --with-alsa --prefix=`pwd`/install --with-pic || exit 1;
+  sed -i '40s:src/common/pa_ringbuffer.o::g' Makefile
+  sed -i '40s:$: src/common/pa_ringbuffer.o:' Makefile
+else
+  # People may have changed OSX's default configuration -- we use clang++.
+  CC=clang CXX=clang++ ./configure --prefix=`pwd`/install --with-pic
+fi
+
+make
+make install
+
+cd ..

+ 11 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C++/patches/portaudio.patch

@@ -0,0 +1,11 @@
+--- Makefile.in	2017-05-31 16:42:16.000000000 -0700
++++ Makefile_new.in	2017-05-31 16:44:02.000000000 -0700
+@@ -193,6 +193,8 @@
+ 	for include in $(INCLUDES); do \
+ 		$(INSTALL_DATA) -m 644 $(top_srcdir)/include/$$include $(DESTDIR)$(includedir)/$$include; \
+ 	done
++	$(INSTALL_DATA) -m 644 $(top_srcdir)/src/common/pa_ringbuffer.h $(DESTDIR)$(includedir)/$$include
++	$(INSTALL_DATA) -m 644 $(top_srcdir)/src/common/pa_util.h $(DESTDIR)$(includedir)/$$include
+ 	$(INSTALL) -d $(DESTDIR)$(libdir)/pkgconfig
+ 	$(INSTALL) -m 644 portaudio-2.0.pc $(DESTDIR)$(libdir)/pkgconfig/portaudio-2.0.pc
+ 	@echo ""

+ 1 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C++/resources

@@ -0,0 +1 @@
+../../resources/

+ 221 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C/demo.c

@@ -0,0 +1,221 @@
+// example/C/demo.c
+
+// Copyright 2017  KITT.AI (author: Guoguo Chen)
+
+#include <assert.h>
+#include <pa_ringbuffer.h>
+#include <pa_util.h>
+#include <portaudio.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <signal.h>
+
+#include "snowboy-detect-c-wrapper.h"
+
+// Pointer to the ring buffer memory.
+char* g_ringbuffer;
+// Ring buffer wrapper used in PortAudio.
+PaUtilRingBuffer g_pa_ringbuffer;
+// Pointer to PortAudio stream.
+PaStream* g_pa_stream;
+// Number of lost samples at each LoadAudioData() due to ring buffer overflow.
+int g_num_lost_samples;
+// Wait for this number of samples in each LoadAudioData() call.
+int g_min_read_samples;
+// Pointer to the audio data.
+int16_t* g_data;
+
+int PortAudioCallback(const void* input,
+                      void* output,
+                      unsigned long frame_count,
+                      const PaStreamCallbackTimeInfo* time_info,
+                      PaStreamCallbackFlags status_flags,
+                      void* user_data) {
+  ring_buffer_size_t num_written_samples =
+      PaUtil_WriteRingBuffer(&g_pa_ringbuffer, input, frame_count);
+  g_num_lost_samples += frame_count - num_written_samples;
+  return paContinue;
+}
+
+void StartAudioCapturing(int sample_rate,
+                         int num_channels, int bits_per_sample) {
+  g_data = NULL;
+  g_num_lost_samples = 0;
+  g_min_read_samples = sample_rate * 0.1;
+
+  // Allocates ring buffer memory.
+  int ringbuffer_size = 16384;
+  g_ringbuffer = (char*)(
+      PaUtil_AllocateMemory(bits_per_sample / 8 * ringbuffer_size));
+  if (g_ringbuffer == NULL) {
+    fprintf(stderr, "Fail to allocate memory for ring buffer.\n");
+    exit(1);
+  }
+
+  // Initializes PortAudio ring buffer.
+  ring_buffer_size_t rb_init_ans =
+      PaUtil_InitializeRingBuffer(&g_pa_ringbuffer, bits_per_sample / 8,
+                                  ringbuffer_size, g_ringbuffer);
+  if (rb_init_ans == -1) {
+    fprintf(stderr, "Ring buffer size is not power of 2.\n");
+    exit(1);
+  }
+
+  // Initializes PortAudio.
+  PaError pa_init_ans = Pa_Initialize();
+  if (pa_init_ans != paNoError) {
+    fprintf(stderr, "Fail to initialize PortAudio, error message is %s.\n",
+           Pa_GetErrorText(pa_init_ans));
+    exit(1);
+  }
+
+  PaError pa_open_ans;
+  if (bits_per_sample == 8) {
+    pa_open_ans = Pa_OpenDefaultStream(
+        &g_pa_stream, num_channels, 0, paUInt8, sample_rate,
+        paFramesPerBufferUnspecified, PortAudioCallback, NULL);
+  } else if (bits_per_sample == 16) {
+    pa_open_ans = Pa_OpenDefaultStream(
+        &g_pa_stream, num_channels, 0, paInt16, sample_rate,
+        paFramesPerBufferUnspecified, PortAudioCallback, NULL);
+  } else if (bits_per_sample == 32) {
+    pa_open_ans = Pa_OpenDefaultStream(
+        &g_pa_stream, num_channels, 0, paInt32, sample_rate,
+        paFramesPerBufferUnspecified, PortAudioCallback, NULL);
+  } else {
+    fprintf(stderr, "Unsupported BitsPerSample: %d.\n", bits_per_sample);
+    exit(1);
+  }
+  if (pa_open_ans != paNoError) {
+    fprintf(stderr, "Fail to open PortAudio stream, error message is %s.\n",
+           Pa_GetErrorText(pa_open_ans));
+    exit(1);
+  }
+
+  PaError pa_stream_start_ans = Pa_StartStream(g_pa_stream);
+  if (pa_stream_start_ans != paNoError) {
+    fprintf(stderr, "Fail to start PortAudio stream, error message is %s.\n",
+           Pa_GetErrorText(pa_stream_start_ans));
+    exit(1);
+  }
+}
+
+void StopAudioCapturing() {
+  if (g_data != NULL) {
+    free(g_data);
+    g_data = NULL;
+  }
+  Pa_StopStream(g_pa_stream);
+  Pa_CloseStream(g_pa_stream);
+  Pa_Terminate();
+  PaUtil_FreeMemory(g_ringbuffer);
+}
+
+int LoadAudioData() {
+  if (g_data != NULL) {
+    free(g_data);
+    g_data = NULL;
+  }
+
+  // Checks ring buffer overflow.
+  if (g_num_lost_samples > 0) {
+    fprintf(stderr, "Lost %d samples due to ring buffer overflow.\n",
+            g_num_lost_samples);
+    g_num_lost_samples = 0;
+  }
+
+  ring_buffer_size_t num_available_samples = 0;
+  while (true) {
+    num_available_samples =
+        PaUtil_GetRingBufferReadAvailable(&g_pa_ringbuffer);
+    if (num_available_samples >= g_min_read_samples) {
+      break;
+    }
+    Pa_Sleep(5);
+  }
+
+  // Reads data.
+  num_available_samples = PaUtil_GetRingBufferReadAvailable(&g_pa_ringbuffer);
+  g_data = malloc(num_available_samples * sizeof(int16_t));
+  ring_buffer_size_t num_read_samples = PaUtil_ReadRingBuffer(
+      &g_pa_ringbuffer, g_data, num_available_samples);
+  if (num_read_samples != num_available_samples) {
+    fprintf(stderr, "%d samples were available, but only %d samples were read"
+            ".\n", num_available_samples, num_read_samples);
+  }
+  return num_read_samples;
+}
+
+void SignalHandler(int signal) {
+  fprintf(stderr, "Caught signal %d, terminating...\n", signal);
+  exit(0);
+}
+
+int main(int argc, char* argv[]) {
+  const char usage[] =
+      "Example that shows how to use Snowboy in pure C. Snowboy was written\n"
+      "in C++, so we have to write a wrapper in order to use Snowboy in pure\n"
+      "C. See snowboy-detect-c-wrapper.h and snowboy-detect-c-wrapper.cc for\n"
+      "more details.\n"
+      "\n"
+      "Parameters are hard-coded in the parameter section for this example.\n"
+      "Please check the source code for more details.\n"
+      "\n"
+      "Audio is captured by PortAudio; feel free to replace PortAudio with\n"
+      "your own audio capturing tool.\n"
+      "\n"
+      "To run the example:\n"
+      "  ./demo\n";
+
+  // Checks the command.
+  if (argc > 1) {
+    printf("%s", usage);
+    exit(1);
+  }
+
+  // Configures signal handling.
+  struct sigaction sig_int_handler;
+  sig_int_handler.sa_handler = SignalHandler;
+  sigemptyset(&sig_int_handler.sa_mask);
+  sig_int_handler.sa_flags = 0;
+  sigaction(SIGINT, &sig_int_handler, NULL);
+
+  // Parameter section.
+  // If you have multiple hotword models (e.g., 2), you should set
+  // <model_filename> and <sensitivity_str> as follows:
+  //   model_filename =
+  //     "resources/models/snowboy.umdl,resources/models/smart_mirror.umdl";
+  //   sensitivity_str = "0.5,0.5";
+  const char resource_filename[] = "resources/common.res";
+  const char model_filename[] = "resources/models/snowboy.umdl";
+  const char sensitivity_str[] = "0.5";
+  float audio_gain = 1;
+
+  // Initializes Snowboy detector.
+  SnowboyDetect* detector = SnowboyDetectConstructor(resource_filename,
+                                                     model_filename);
+  SnowboyDetectSetSensitivity(detector, sensitivity_str);
+  SnowboyDetectSetAudioGain(detector, audio_gain);
+
+  // Initializes PortAudio. You may use other tools to capture the audio.
+  StartAudioCapturing(SnowboyDetectSampleRate(detector),
+                      SnowboyDetectNumChannels(detector),
+                      SnowboyDetectBitsPerSample(detector));
+
+  // Runs the detection.
+  printf("Listening... Press Ctrl+C to exit\n");
+  while (true) {
+    int array_length = LoadAudioData();
+    if (array_length != 0) {
+      int result = SnowboyDetectRunDetection(detector,
+                                             g_data, array_length, false);
+      if (result > 0) {
+        printf("Hotword %d detected!\n", result);
+      }
+    }
+  }
+
+  StopAudioCapturing();
+  SnowboyDetectDestructor(detector);
+  return 0;
+}

+ 58 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C/demo.mk

@@ -0,0 +1,58 @@
+TOPDIR := ../../
+DYNAMIC := True
+CC :=
+CXX :=
+LDFLAGS :=
+LDLIBS :=
+PORTAUDIOINC := portaudio/install/include
+PORTAUDIOLIBS := portaudio/install/lib/libportaudio.a
+
+CFLAGS :=
+CXXFLAGS += -D_GLIBCXX_USE_CXX11_ABI=0
+
+ifeq ($(DYNAMIC), True)
+  CFLAGS += -fPIC
+  CXXFLAGS += -fPIC
+endif
+
+ifeq ($(shell uname -m | cut -c 1-3), x86)
+  CFLAGS += -msse -msse2
+  CXXFLAGS += -msse -msse2
+endif
+
+ifeq ($(shell uname), Darwin)
+  # By default Mac uses clang++ as g++, but people may have changed their
+  # default configuration.
+  CC := clang
+  CXX := clang++
+  CFLAGS += -I$(TOPDIR) -Wall -I$(PORTAUDIOINC)
+  CXXFLAGS += -I$(TOPDIR) -Wall -Wno-sign-compare -Winit-self \
+      -DHAVE_POSIX_MEMALIGN -DHAVE_CLAPACK -I$(PORTAUDIOINC)
+  LDLIBS += -ldl -lm -framework Accelerate -framework CoreAudio \
+      -framework AudioToolbox -framework AudioUnit -framework CoreServices \
+      $(PORTAUDIOLIBS)
+  SNOWBOYDETECTLIBFILE := $(TOPDIR)/lib/osx/libsnowboy-detect.a
+else ifeq ($(shell uname), Linux)
+  CC := gcc
+  CXX := g++
+  CFLAGS += -I$(TOPDIR) -Wall -I$(PORTAUDIOINC)
+  CXXFLAGS += -I$(TOPDIR) -std=c++0x -Wall -Wno-sign-compare \
+      -Wno-unused-local-typedefs -Winit-self -rdynamic \
+      -DHAVE_POSIX_MEMALIGN -I$(PORTAUDIOINC)
+  LDLIBS += -ldl -lm -Wl,-Bstatic -Wl,-Bdynamic -lrt -lpthread $(PORTAUDIOLIBS)\
+      -L/usr/lib/atlas-base -lf77blas -lcblas -llapack_atlas -latlas -lasound
+  SNOWBOYDETECTLIBFILE := $(TOPDIR)/lib/ubuntu64/libsnowboy-detect.a
+  ifneq (,$(findstring arm,$(shell uname -m)))
+    SNOWBOYDETECTLIBFILE := $(TOPDIR)/lib/rpi/libsnowboy-detect.a
+  endif
+endif
+
+# Suppress clang warnings...
+COMPILER = $(shell $(CXX) -v 2>&1 )
+ifeq ($(findstring clang,$(COMPILER)), clang)
+  CXXFLAGS += -Wno-mismatched-tags -Wno-c++11-extensions
+endif
+
+# Set optimization level.
+CFLAGS += -O3
+CXXFLAGS += -O3

+ 36 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C/install_portaudio.sh

@@ -0,0 +1,36 @@
+#!/bin/bash
+
+# This script attempts to install PortAudio, which can grab a live audio stream
+# from the soundcard.
+#
+# On linux systems, we only build with ALSA, so make sure you install it using
+# e.g.:
+#   sudo apt-get -y install libasound2-dev
+
+echo "Installing portaudio"
+
+if [ ! -e pa_stable_v190600_20161030.tgz ]; then
+  wget -T 10 -t 3 \
+    http://www.portaudio.com/archives/pa_stable_v190600_20161030.tgz || exit 1;
+fi
+
+tar -xovzf pa_stable_v190600_20161030.tgz || exit 1
+
+cd portaudio
+patch < ../patches/portaudio.patch
+
+MACOS=`uname 2>/dev/null | grep Darwin`
+if [ -z "$MACOS" ]; then
+  ./configure --without-jack --without-oss \
+    --with-alsa --prefix=`pwd`/install --with-pic || exit 1;
+  sed -i '40s:src/common/pa_ringbuffer.o::g' Makefile
+  sed -i '40s:$: src/common/pa_ringbuffer.o:' Makefile
+else
+  # People may have changed OSX's default configuration -- we use clang++.
+  CC=clang CXX=clang++ ./configure --prefix=`pwd`/install --with-pic
+fi
+
+make
+make install
+
+cd ..

+ 11 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C/patches/portaudio.patch

@@ -0,0 +1,11 @@
+--- Makefile.in	2017-05-31 16:42:16.000000000 -0700
++++ Makefile_new.in	2017-05-31 16:44:02.000000000 -0700
+@@ -193,6 +193,8 @@
+ 	for include in $(INCLUDES); do \
+ 		$(INSTALL_DATA) -m 644 $(top_srcdir)/include/$$include $(DESTDIR)$(includedir)/$$include; \
+ 	done
++	$(INSTALL_DATA) -m 644 $(top_srcdir)/src/common/pa_ringbuffer.h $(DESTDIR)$(includedir)/$$include
++	$(INSTALL_DATA) -m 644 $(top_srcdir)/src/common/pa_util.h $(DESTDIR)$(includedir)/$$include
+ 	$(INSTALL) -d $(DESTDIR)$(libdir)/pkgconfig
+ 	$(INSTALL) -m 644 portaudio-2.0.pc $(DESTDIR)$(libdir)/pkgconfig/portaudio-2.0.pc
+ 	@echo ""

+ 1 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C/resources

@@ -0,0 +1 @@
+../../resources

+ 82 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C/snowboy-detect-c-wrapper.cc

@@ -0,0 +1,82 @@
+// snowboy-detect-c-wrapper.cc
+
+// Copyright 2017  KITT.AI (author: Guoguo Chen)
+
+#include <assert.h>
+
+#include "snowboy-detect-c-wrapper.h"
+#include "include/snowboy-detect.h"
+
+extern "C" {
+  SnowboyDetect* SnowboyDetectConstructor(const char* const resource_filename,
+                                          const char* const model_str) {
+    return reinterpret_cast<SnowboyDetect*>(
+        new snowboy::SnowboyDetect(resource_filename, model_str));
+  }
+
+  bool SnowboyDetectReset(SnowboyDetect* detector) {
+    assert(detector != NULL);
+    return reinterpret_cast<snowboy::SnowboyDetect*>(detector)->Reset();
+  }
+
+  int SnowboyDetectRunDetection(SnowboyDetect* detector,
+                                const int16_t* const data,
+                                const int array_length, bool is_end) {
+    assert(detector != NULL);
+    assert(data != NULL);
+    return reinterpret_cast<snowboy::SnowboyDetect*>(
+        detector)->RunDetection(data, array_length, is_end);
+  }
+
+  void SnowboyDetectSetSensitivity(SnowboyDetect* detector,
+                                   const char* const sensitivity_str) {
+    assert(detector != NULL);
+    reinterpret_cast<snowboy::SnowboyDetect*>(
+        detector)->SetSensitivity(sensitivity_str);
+  }
+
+  void SnowboyDetectSetAudioGain(SnowboyDetect* detector,
+                                 const float audio_gain) {
+    assert(detector != NULL);
+    reinterpret_cast<snowboy::SnowboyDetect*>(
+        detector)->SetAudioGain(audio_gain);
+  }
+
+  void SnowboyDetectUpdateModel(SnowboyDetect* detector) {
+    assert(detector != NULL);
+    reinterpret_cast<snowboy::SnowboyDetect*>(detector)->UpdateModel();
+  }
+
+  void SnowboyDetectApplyFrontend(SnowboyDetect* detector,
+                                  const bool apply_frontend) {
+    assert(detector != NULL);
+    reinterpret_cast<snowboy::SnowboyDetect*>(
+        detector)->ApplyFrontend(apply_frontend);
+  }
+
+  int SnowboyDetectNumHotwords(SnowboyDetect* detector) {
+    assert(detector != NULL);
+    return reinterpret_cast<snowboy::SnowboyDetect*>(detector)->NumHotwords();
+  }
+
+  int SnowboyDetectSampleRate(SnowboyDetect* detector) {
+    assert(detector != NULL);
+    return reinterpret_cast<snowboy::SnowboyDetect*>(detector)->SampleRate();
+  }
+
+  int SnowboyDetectNumChannels(SnowboyDetect* detector) {
+    assert(detector != NULL);
+    return reinterpret_cast<snowboy::SnowboyDetect*>(detector)->NumChannels();
+  }
+
+  int SnowboyDetectBitsPerSample(SnowboyDetect* detector) {
+    assert(detector != NULL);
+    return reinterpret_cast<snowboy::SnowboyDetect*>(detector)->BitsPerSample();
+  }
+
+  void SnowboyDetectDestructor(SnowboyDetect* detector) {
+    assert(detector != NULL);
+    delete reinterpret_cast<snowboy::SnowboyDetect*>(detector);
+    detector = NULL;
+  }
+}

+ 0 - 0
include/snowboy/include/snowboy-detect-c-wrapper.h → catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/C/snowboy-detect-c-wrapper.h


+ 0 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/__init__.py


+ 35 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/demo.py

@@ -0,0 +1,35 @@
+import snowboydecoder
+import sys
+import signal
+
+interrupted = False
+
+
+def signal_handler(signal, frame):
+    global interrupted
+    interrupted = True
+
+
+def interrupt_callback():
+    global interrupted
+    return interrupted
+
+if len(sys.argv) == 1:
+    print("Error: need to specify model name")
+    print("Usage: python demo.py your.model")
+    sys.exit(-1)
+
+model = sys.argv[1]
+
+# capture SIGINT signal, e.g., Ctrl+C
+signal.signal(signal.SIGINT, signal_handler)
+
+detector = snowboydecoder.HotwordDetector(model, sensitivity=0.5)
+print('Listening... Press Ctrl+C to exit')
+
+# main loop
+detector.start(detected_callback=snowboydecoder.play_audio_file,
+               interrupt_check=interrupt_callback,
+               sleep_time=0.03)
+
+detector.terminate()

+ 41 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/demo2.py

@@ -0,0 +1,41 @@
+import snowboydecoder
+import sys
+import signal
+
+# Demo code for listening to two hotwords at the same time
+
+interrupted = False
+
+
+def signal_handler(signal, frame):
+    global interrupted
+    interrupted = True
+
+
+def interrupt_callback():
+    global interrupted
+    return interrupted
+
+if len(sys.argv) != 3:
+    print("Error: need to specify 2 model names")
+    print("Usage: python demo2.py 1st.model 2nd.model")
+    sys.exit(-1)
+
+models = sys.argv[1:]
+
+# capture SIGINT signal, e.g., Ctrl+C
+signal.signal(signal.SIGINT, signal_handler)
+
+sensitivity = [0.5]*len(models)
+detector = snowboydecoder.HotwordDetector(models, sensitivity=sensitivity)
+callbacks = [lambda: snowboydecoder.play_audio_file(snowboydecoder.DETECT_DING),
+             lambda: snowboydecoder.play_audio_file(snowboydecoder.DETECT_DONG)]
+print('Listening... Press Ctrl+C to exit')
+
+# main loop
+# make sure you have the same numbers of callbacks and models
+detector.start(detected_callback=callbacks,
+               interrupt_check=interrupt_callback,
+               sleep_time=0.03)
+
+detector.terminate()

+ 40 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/demo3.py

@@ -0,0 +1,40 @@
+import snowboydecoder
+import sys
+import wave
+
+# Demo code for detecting hotword in a .wav file
+# Example Usage:
+#  $ python demo3.py resources/snowboy.wav resources/models/snowboy.umdl
+# Should print:
+#  Hotword Detected!
+#
+#  $ python demo3.py resources/ding.wav resources/models/snowboy.umdl
+# Should print:
+#  Hotword Not Detected!
+
+
+if len(sys.argv) != 3:
+    print("Error: need to specify wave file name and model name")
+    print("Usage: python demo3.py wave_file model_file")
+    sys.exit(-1)
+
+wave_file = sys.argv[1]
+model_file = sys.argv[2]
+
+f = wave.open(wave_file)
+assert f.getnchannels() == 1, "Error: Snowboy only supports 1 channel of audio (mono, not stereo)"
+assert f.getframerate() == 16000, "Error: Snowboy only supports 16K sampling rate"
+assert f.getsampwidth() == 2, "Error: Snowboy only supports 16bit per sample"
+data = f.readframes(f.getnframes())
+f.close()
+
+sensitivity = 0.5
+detection = snowboydecoder.HotwordDetector(model_file, sensitivity=sensitivity)
+
+ans = detection.detector.RunDetection(data)
+
+if ans == 1:
+    print('Hotword Detected!')
+else:
+    print('Hotword Not Detected!')
+
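demo3.py above only accepts mono, 16 kHz, 16-bit WAV input. A stdlib-only sketch (illustrative, not part of the commit) that writes such a file in memory and re-runs the same three format checks:

```python
import io
import struct
import wave

# Write one second of silence in the format snowboy expects:
# 1 channel, 2 bytes per sample, 16000 Hz.
buf = io.BytesIO()
w = wave.open(buf, 'wb')
w.setnchannels(1)
w.setsampwidth(2)
w.setframerate(16000)
w.writeframes(struct.pack('<16000h', *([0] * 16000)))
w.close()

# Re-open and apply the same checks demo3.py asserts before detection.
buf.seek(0)
f = wave.open(buf, 'rb')
print(f.getnchannels(), f.getframerate(), f.getsampwidth())  # 1 16000 2
```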

+ 76 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/demo4.py

@@ -0,0 +1,76 @@
+import snowboydecoder
+import sys
+import signal
+import speech_recognition as sr
+import os
+
+"""
+This demo file shows you how to use the new_message_callback to interact with
+the recorded audio after a keyword is spoken. It uses the speech recognition
+library in order to convert the recorded audio into text.
+
+Information on installing the speech recognition library can be found at:
+https://pypi.python.org/pypi/SpeechRecognition/
+"""
+
+
+interrupted = False
+
+
+def audioRecorderCallback(fname):
+    print("converting audio to text")
+    r = sr.Recognizer()
+    with sr.AudioFile(fname) as source:
+        audio = r.record(source)  # read the entire audio file
+    # recognize speech using Google Speech Recognition
+    try:
+        # for testing purposes, we're just using the default API key
+        # to use another API key, use `r.recognize_google(audio, key="GOOGLE_SPEECH_RECOGNITION_API_KEY")`
+        # instead of `r.recognize_google(audio)`
+        print(r.recognize_google(audio))
+    except sr.UnknownValueError:
+        print("Google Speech Recognition could not understand audio")
+    except sr.RequestError as e:
+        print("Could not request results from Google Speech Recognition service; {0}".format(e))
+
+    os.remove(fname)
+
+
+
+def detectedCallback():
+  sys.stdout.write("recording audio...")
+  sys.stdout.flush()
+
+def signal_handler(signal, frame):
+    global interrupted
+    interrupted = True
+
+
+def interrupt_callback():
+    global interrupted
+    return interrupted
+
+if len(sys.argv) == 1:
+    print("Error: need to specify model name")
+    print("Usage: python demo4.py your.model")
+    sys.exit(-1)
+
+model = sys.argv[1]
+
+# capture SIGINT signal, e.g., Ctrl+C
+signal.signal(signal.SIGINT, signal_handler)
+
+detector = snowboydecoder.HotwordDetector(model, sensitivity=0.38)
+print("Listening... Press Ctrl+C to exit")
+
+# main loop
+detector.start(detected_callback=detectedCallback,
+               audio_recorder_callback=audioRecorderCallback,
+               interrupt_check=interrupt_callback,
+               sleep_time=0.01)
+
+detector.terminate()
+
+
+
+

+ 35 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/demo_arecord.py

@@ -0,0 +1,35 @@
+import snowboydecoder_arecord
+import sys
+import signal
+
+interrupted = False
+
+
+def signal_handler(signal, frame):
+    global interrupted
+    interrupted = True
+
+
+def interrupt_callback():
+    global interrupted
+    return interrupted
+
+if len(sys.argv) == 1:
+    print("Error: need to specify model name")
+    print("Usage: python demo_arecord.py your.model")
+    sys.exit(-1)
+
+model = sys.argv[1]
+
+# capture SIGINT signal, e.g., Ctrl+C
+signal.signal(signal.SIGINT, signal_handler)
+
+detector = snowboydecoder_arecord.HotwordDetector(model, sensitivity=0.5)
+print('Listening... Press Ctrl+C to exit')
+
+# main loop
+detector.start(detected_callback=snowboydecoder_arecord.play_audio_file,
+               interrupt_check=interrupt_callback,
+               sleep_time=0.03)
+
+detector.terminate()

+ 47 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/demo_threaded.py

@@ -0,0 +1,47 @@
+import snowboythreaded
+import sys
+import signal
+import time
+
+stop_program = False
+
+# This a demo that shows running Snowboy in another thread
+
+
+def signal_handler(signal, frame):
+    global stop_program
+    stop_program = True
+
+
+if len(sys.argv) == 1:
+    print("Error: need to specify model name")
+    print("Usage: python demo_threaded.py your.model")
+    sys.exit(-1)
+
+model = sys.argv[1]
+
+# capture SIGINT signal, e.g., Ctrl+C
+signal.signal(signal.SIGINT, signal_handler)
+
+# Initialize ThreadedDetector object and start the detection thread
+threaded_detector = snowboythreaded.ThreadedDetector(model, sensitivity=0.5)
+threaded_detector.start()
+
+print('Listening... Press Ctrl+C to exit')
+
+# main loop
+threaded_detector.start_recog(sleep_time=0.03)
+
+# Let audio initialization happen before requesting input
+time.sleep(1)
+
+# Do a simple task separate from the detection - addition of numbers
+while not stop_program:
+    try:
+        num1 = int(raw_input("Enter the first number to add: "))
+        num2 = int(raw_input("Enter the second number to add: "))
+        print("Sum of numbers: {}".format(num1 + num2))
+    except ValueError:
+        print("You did not enter a number.")
+
+threaded_detector.terminate()

+ 1 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/requirements.txt

@@ -0,0 +1 @@
+PyAudio==0.2.9

+ 1 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/resources

@@ -0,0 +1 @@
+../../resources/

+ 248 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/snowboydecoder.py

@@ -0,0 +1,248 @@
+#!/usr/bin/env python
+
+import collections
+import pyaudio
+import snowboydetect
+import time
+import wave
+import os
+import logging
+
+logging.basicConfig()
+logger = logging.getLogger("snowboy")
+logger.setLevel(logging.INFO)
+TOP_DIR = os.path.dirname(os.path.abspath(__file__))
+
+RESOURCE_FILE = os.path.join(TOP_DIR, "resources/common.res")
+DETECT_DING = os.path.join(TOP_DIR, "resources/ding.wav")
+DETECT_DONG = os.path.join(TOP_DIR, "resources/dong.wav")
+
+
+class RingBuffer(object):
+    """Ring buffer to hold audio from PortAudio"""
+    def __init__(self, size = 4096):
+        self._buf = collections.deque(maxlen=size)
+
+    def extend(self, data):
+        """Adds data to the end of buffer"""
+        self._buf.extend(data)
+
+    def get(self):
+        """Retrieves data from the beginning of buffer and clears it"""
+        tmp = bytes(bytearray(self._buf))
+        self._buf.clear()
+        return tmp
+
+
+def play_audio_file(fname=DETECT_DING):
+    """Simple callback function to play a wave file. By default it plays
+    a Ding sound.
+
+    :param str fname: wave file name
+    :return: None
+    """
+    ding_wav = wave.open(fname, 'rb')
+    ding_data = ding_wav.readframes(ding_wav.getnframes())
+    audio = pyaudio.PyAudio()
+    stream_out = audio.open(
+        format=audio.get_format_from_width(ding_wav.getsampwidth()),
+        channels=ding_wav.getnchannels(),
+        rate=ding_wav.getframerate(), input=False, output=True)
+    stream_out.start_stream()
+    stream_out.write(ding_data)
+    time.sleep(0.2)
+    stream_out.stop_stream()
+    stream_out.close()
+    audio.terminate()
+
+
+class HotwordDetector(object):
+    """
+    Snowboy decoder to detect whether a keyword specified by `decoder_model`
+    exists in a microphone input stream.
+
+    :param decoder_model: decoder model file path, a string or a list of strings
+    :param resource: resource file path.
+    :param sensitivity: decoder sensitivity, a float or a list of floats.
+                              The bigger the value, the more sensitive the
+                              decoder. If an empty list is provided, then the
+                              default sensitivity in the model will be used.
+    :param audio_gain: multiply input volume by this factor.
+    """
+    def __init__(self, decoder_model,
+                 resource=RESOURCE_FILE,
+                 sensitivity=[],
+                 audio_gain=1):
+
+        def audio_callback(in_data, frame_count, time_info, status):
+            self.ring_buffer.extend(in_data)
+            play_data = chr(0) * len(in_data)
+            return play_data, pyaudio.paContinue
+
+        tm = type(decoder_model)
+        ts = type(sensitivity)
+        if tm is not list:
+            decoder_model = [decoder_model]
+        if ts is not list:
+            sensitivity = [sensitivity]
+        model_str = ",".join(decoder_model)
+
+        self.detector = snowboydetect.SnowboyDetect(
+            resource_filename=resource.encode(), model_str=model_str.encode())
+        self.detector.SetAudioGain(audio_gain)
+        self.num_hotwords = self.detector.NumHotwords()
+
+        if len(decoder_model) > 1 and len(sensitivity) == 1:
+            sensitivity = sensitivity*self.num_hotwords
+        if len(sensitivity) != 0:
+            assert self.num_hotwords == len(sensitivity), \
+                "number of hotwords in decoder_model (%d) and sensitivity " \
+                "(%d) does not match" % (self.num_hotwords, len(sensitivity))
+        sensitivity_str = ",".join([str(t) for t in sensitivity])
+        if len(sensitivity) != 0:
+            self.detector.SetSensitivity(sensitivity_str.encode())
+
+        self.ring_buffer = RingBuffer(
+            self.detector.NumChannels() * self.detector.SampleRate() * 5)
+        self.audio = pyaudio.PyAudio()
+        self.stream_in = self.audio.open(
+            input=True, output=False,
+            format=self.audio.get_format_from_width(
+                self.detector.BitsPerSample() / 8),
+            channels=self.detector.NumChannels(),
+            rate=self.detector.SampleRate(),
+            frames_per_buffer=2048,
+            stream_callback=audio_callback)
+
+
+    def start(self, detected_callback=play_audio_file,
+              interrupt_check=lambda: False,
+              sleep_time=0.03,
+              audio_recorder_callback=None,
+              silent_count_threshold=15,
+              recording_timeout=100):
+        """
+        Start the voice detector. For every `sleep_time` second it checks the
+        audio buffer for triggering keywords. If detected, then call
+        corresponding function in `detected_callback`, which can be a single
+        function (single model) or a list of callback functions (multiple
+    models). Every loop it also calls `interrupt_check` -- if it returns
+    True, the loop breaks and the function returns.
+
+        :param detected_callback: a function or list of functions. The number of
+                                  items must match the number of models in
+                                  `decoder_model`.
+        :param interrupt_check: a function that returns True if the main loop
+                                needs to stop.
+        :param float sleep_time: how long, in seconds, each loop iteration waits.
+        :param audio_recorder_callback: if specified, this will be called after
+                                        a keyword has been spoken and after the
+                                        phrase immediately after the keyword has
+                                        been recorded. The function will be
+                                        passed the name of the file where the
+                                        phrase was recorded.
+        :param silent_count_threshold: indicates how long silence must be heard
+                                       to mark the end of a phrase that is
+                                       being recorded.
+        :param recording_timeout: limits the maximum length of a recording.
+        :return: None
+        """
+        if interrupt_check():
+            logger.debug("detect voice return")
+            return
+
+        tc = type(detected_callback)
+        if tc is not list:
+            detected_callback = [detected_callback]
+        if len(detected_callback) == 1 and self.num_hotwords > 1:
+            detected_callback *= self.num_hotwords
+
+        assert self.num_hotwords == len(detected_callback), \
+            "Error: hotwords in your models (%d) do not match the number of " \
+            "callbacks (%d)" % (self.num_hotwords, len(detected_callback))
+
+        logger.debug("detecting...")
+
+        state = "PASSIVE"
+        while True:
+            if interrupt_check():
+                logger.debug("detect voice break")
+                break
+            data = self.ring_buffer.get()
+            if len(data) == 0:
+                time.sleep(sleep_time)
+                continue
+
+            status = self.detector.RunDetection(data)
+            if status == -1:
+                logger.warning("Error initializing streams or reading audio data")
+
+            #small state machine to handle recording of phrase after keyword
+            if state == "PASSIVE":
+                if status > 0: #key word found
+                    self.recordedData = []
+                    self.recordedData.append(data)
+                    silentCount = 0
+                    recordingCount = 0
+                    message = "Keyword " + str(status) + " detected at time: "
+                    message += time.strftime("%Y-%m-%d %H:%M:%S",
+                                         time.localtime(time.time()))
+                    logger.info(message)
+                    callback = detected_callback[status-1]
+                    if callback is not None:
+                        callback()
+
+                    if audio_recorder_callback is not None:
+                        state = "ACTIVE"
+                    continue
+
+            elif state == "ACTIVE":
+                stopRecording = False
+                if recordingCount > recording_timeout:
+                    stopRecording = True
+                elif status == -2: #silence found
+                    if silentCount > silent_count_threshold:
+                        stopRecording = True
+                    else:
+                        silentCount = silentCount + 1
+                elif status == 0: #voice found
+                    silentCount = 0
+
+                if stopRecording:
+                    fname = self.saveMessage()
+                    audio_recorder_callback(fname)
+                    state = "PASSIVE"
+                    continue
+
+                recordingCount = recordingCount + 1
+                self.recordedData.append(data)
+
+        logger.debug("finished.")
+
+    def saveMessage(self):
+        """
+        Save the message stored in self.recordedData to a timestamped file.
+        """
+        filename = 'output' + str(int(time.time())) + '.wav'
+        data = b''.join(self.recordedData)
+
+        #use wave to save data
+        wf = wave.open(filename, 'wb')
+        wf.setnchannels(1)
+        wf.setsampwidth(self.audio.get_sample_size(
+            self.audio.get_format_from_width(
+                self.detector.BitsPerSample() / 8)))
+        wf.setframerate(self.detector.SampleRate())
+        wf.writeframes(data)
+        wf.close()
+        logger.debug("finished saving: " + filename)
+        return filename
+
+    def terminate(self):
+        """
+        Terminate audio stream. Users cannot call start() again to detect.
+        :return: None
+        """
+        self.stream_in.stop_stream()
+        self.stream_in.close()
+        self.audio.terminate()

+ 181 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/snowboydecoder_arecord.py

@@ -0,0 +1,181 @@
+#!/usr/bin/env python
+
+import collections
+import snowboydetect
+import time
+import wave
+import os
+import logging
+import subprocess
+import threading
+
+logging.basicConfig()
+logger = logging.getLogger("snowboy")
+logger.setLevel(logging.INFO)
+TOP_DIR = os.path.dirname(os.path.abspath(__file__))
+
+RESOURCE_FILE = os.path.join(TOP_DIR, "resources/common.res")
+DETECT_DING = os.path.join(TOP_DIR, "resources/ding.wav")
+DETECT_DONG = os.path.join(TOP_DIR, "resources/dong.wav")
+
+
+class RingBuffer(object):
+    """Ring buffer to hold audio from audio capturing tool"""
+    def __init__(self, size = 4096):
+        self._buf = collections.deque(maxlen=size)
+
+    def extend(self, data):
+        """Adds data to the end of buffer"""
+        self._buf.extend(data)
+
+    def get(self):
+        """Retrieves data from the beginning of buffer and clears it"""
+        tmp = bytes(bytearray(self._buf))
+        self._buf.clear()
+        return tmp
+
+
+def play_audio_file(fname=DETECT_DING):
+    """Simple callback function to play a wave file. By default it plays
+    a Ding sound.
+
+    :param str fname: wave file name
+    :return: None
+    """
+    os.system("aplay " + fname + " > /dev/null 2>&1")
+
+
+class HotwordDetector(object):
+    """
+    Snowboy decoder to detect whether a keyword specified by `decoder_model`
+    exists in a microphone input stream.
+
+    :param decoder_model: decoder model file path, a string or a list of strings
+    :param resource: resource file path.
+    :param sensitivity: decoder sensitivity, a float or a list of floats.
+                              The bigger the value, the more sensitive the
+                              decoder. If an empty list is provided, then the
+                              default sensitivity in the model will be used.
+    :param audio_gain: multiply input volume by this factor.
+    """
+    def __init__(self, decoder_model,
+                 resource=RESOURCE_FILE,
+                 sensitivity=[],
+                 audio_gain=1):
+
+        tm = type(decoder_model)
+        ts = type(sensitivity)
+        if tm is not list:
+            decoder_model = [decoder_model]
+        if ts is not list:
+            sensitivity = [sensitivity]
+        model_str = ",".join(decoder_model)
+
+        self.detector = snowboydetect.SnowboyDetect(
+            resource_filename=resource.encode(), model_str=model_str.encode())
+        self.detector.SetAudioGain(audio_gain)
+        self.num_hotwords = self.detector.NumHotwords()
+
+        if len(decoder_model) > 1 and len(sensitivity) == 1:
+            sensitivity = sensitivity*self.num_hotwords
+        if len(sensitivity) != 0:
+            assert self.num_hotwords == len(sensitivity), \
+                "number of hotwords in decoder_model (%d) and sensitivity " \
+                "(%d) does not match" % (self.num_hotwords, len(sensitivity))
+        sensitivity_str = ",".join([str(t) for t in sensitivity])
+        if len(sensitivity) != 0:
+            self.detector.SetSensitivity(sensitivity_str.encode())
+
+        self.ring_buffer = RingBuffer(
+            self.detector.NumChannels() * self.detector.SampleRate() * 5)
+
+    def record_proc(self):
+        CHUNK = 2048
+        RECORD_RATE = 16000
+        cmd = 'arecord -q -r %d -f S16_LE' % RECORD_RATE
+        process = subprocess.Popen(cmd.split(' '),
+                                   stdout = subprocess.PIPE,
+                                   stderr = subprocess.PIPE)
+        wav = wave.open(process.stdout, 'rb')
+        while self.recording:
+            data = wav.readframes(CHUNK)
+            self.ring_buffer.extend(data)
+        process.terminate()
+
+    def init_recording(self):
+        """
+        Start a thread for spawning arecord process and reading its stdout
+        """
+        self.recording = True
+        self.record_thread = threading.Thread(target = self.record_proc)
+        self.record_thread.start()
+
+    def start(self, detected_callback=play_audio_file,
+              interrupt_check=lambda: False,
+              sleep_time=0.03):
+        """
+        Start the voice detector. For every `sleep_time` second it checks the
+        audio buffer for triggering keywords. If detected, then call
+        corresponding function in `detected_callback`, which can be a single
+        function (single model) or a list of callback functions (multiple
+    models). Every loop it also calls `interrupt_check` -- if it returns
+    True, the loop breaks and the function returns.
+
+        :param detected_callback: a function or list of functions. The number of
+                                  items must match the number of models in
+                                  `decoder_model`.
+        :param interrupt_check: a function that returns True if the main loop
+                                needs to stop.
+        :param float sleep_time: how long, in seconds, each loop iteration waits.
+        :return: None
+        """
+
+        self.init_recording()
+
+        if interrupt_check():
+            logger.debug("detect voice return")
+            return
+
+        tc = type(detected_callback)
+        if tc is not list:
+            detected_callback = [detected_callback]
+        if len(detected_callback) == 1 and self.num_hotwords > 1:
+            detected_callback *= self.num_hotwords
+
+        assert self.num_hotwords == len(detected_callback), \
+            "Error: hotwords in your models (%d) do not match the number of " \
+            "callbacks (%d)" % (self.num_hotwords, len(detected_callback))
+
+        logger.debug("detecting...")
+
+        while True:
+            if interrupt_check():
+                logger.debug("detect voice break")
+                break
+            data = self.ring_buffer.get()
+            if len(data) == 0:
+                time.sleep(sleep_time)
+                continue
+
+            ans = self.detector.RunDetection(data)
+            if ans == -1:
+                logger.warning("Error initializing streams or reading audio data")
+            elif ans > 0:
+                message = "Keyword " + str(ans) + " detected at time: "
+                message += time.strftime("%Y-%m-%d %H:%M:%S",
+                                         time.localtime(time.time()))
+                logger.info(message)
+                callback = detected_callback[ans-1]
+                if callback is not None:
+                    callback()
+
+        logger.debug("finished.")
+
+    def terminate(self):
+        """
+        Terminate audio stream. Users cannot call start() again to detect.
+        :return: None
+        """
+        self.recording = False
+        self.record_thread.join()
+
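`record_proc` in snowboydecoder_arecord.py above streams audio by spawning `arecord` and reading its stdout in fixed-size chunks from a background thread. The same producer pattern can be sketched with a generic child process in place of `arecord`, so it runs anywhere (the child command below is a stand-in, not part of the original):

```python
import collections
import subprocess
import sys
import threading


def stream_stdout(cmd, buf, chunk_size=4):
    """Read a child process's stdout in fixed-size chunks into a bounded deque."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    while True:
        data = proc.stdout.read(chunk_size)
        if not data:        # EOF: the child has exited
            break
        buf.extend(data)
    proc.wait()


buf = collections.deque(maxlen=1024)
# Stand-in producer; the snowboy code runs `arecord -q -r 16000 -f S16_LE` here.
child = [sys.executable, "-c", "import sys; sys.stdout.write('abcdefgh')"]
reader = threading.Thread(target=stream_stdout, args=(child, buf))
reader.start()
reader.join()
print(bytes(bytearray(buf)))  # b'abcdefgh'
```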

+ 96 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python/snowboythreaded.py

@@ -0,0 +1,96 @@
+import snowboydecoder
+import threading
+import Queue
+
+
+class ThreadedDetector(threading.Thread):
+    """
+    Wrapper class around detectors to run them in a separate thread
+    and provide methods to pause, resume, and modify detection
+    """
+
+    def __init__(self, models, **kwargs):
+        """
+        Initialize Detectors object. **kwargs is for any __init__ keyword
+        arguments to be passed into HotWordDetector __init__() method.
+        """
+        threading.Thread.__init__(self)
+        self.models = models
+        self.init_kwargs = kwargs
+        self.interrupted = True
+        self.commands = Queue.Queue()
+        self.vars_are_changed = True
+        self.detectors = None  # Initialize when thread is run in self.run()
+        self.run_kwargs = None  # Initialize when detectors start in self.start_recog()
+
+    def initialize_detectors(self):
+        """
+        Returns initialized Snowboy HotwordDetector objects
+        """
+        self.detectors = snowboydecoder.HotwordDetector(self.models, **self.init_kwargs)
+
+    def run(self):
+        """
+        Runs in separate thread - waits on command to either run detectors
+        or terminate thread from commands queue
+        """
+        try:
+            while True:
+                command = self.commands.get(True)
+                if command == "Start":
+                    self.interrupted = False
+                    if self.vars_are_changed:
+                        # If there is an existing detector object, terminate it
+                        if self.detectors is not None:
+                            self.detectors.terminate()
+                        self.initialize_detectors()
+                        self.vars_are_changed = False
+                    # Start detectors - blocks until interrupted by self.interrupted variable
+                    self.detectors.start(interrupt_check=lambda: self.interrupted, **self.run_kwargs)
+                elif command == "Terminate":
+                    # Program ending - terminate thread
+                    break
+        finally:
+            if self.detectors is not None:
+                self.detectors.terminate()
+
+    def start_recog(self, **kwargs):
+        """
+        Starts recognition in thread. Accepts kwargs to pass into the
+        HotWordDetector.start() method, but does not accept interrupt_callback,
+        as that is already set up.
+        """
+        assert "interrupt_check" not in kwargs, \
+            "Cannot set interrupt_check argument. To interrupt detectors, use Detectors.pause_recog() instead"
+        self.run_kwargs = kwargs
+        self.commands.put("Start")
+
+    def pause_recog(self):
+        """
+        Halts recognition in thread.
+        """
+        self.interrupted = True
+
+    def terminate(self):
+        """
+        Terminates recognition thread - called when program terminates
+        """
+        self.pause_recog()
+        self.commands.put("Terminate")
+
+    def is_running(self):
+        return not self.interrupted
+
+    def change_models(self, models):
+        if self.is_running():
+            print("Models will be changed after restarting detectors.")
+        if self.models != models:
+            self.models = models
+            self.vars_are_changed = True
+
+    def change_sensitivity(self, sensitivity):
+        if self.is_running():
+            print("Sensitivity will be changed after restarting detectors.")
+        if self.init_kwargs.get('sensitivity') != sensitivity:
+            self.init_kwargs['sensitivity'] = sensitivity
+            self.vars_are_changed = True
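`ThreadedDetector.run()` above is a command-queue worker: the thread blocks on `commands.get(True)` and reacts to `"Start"` and `"Terminate"` tokens. A minimal, dependency-free sketch of that control pattern (the detector calls are replaced by a counter for illustration):

```python
import queue  # `Queue` on Python 2, as imported in the file above
import threading


class Worker(threading.Thread):
    """Thread that blocks on a command queue, mirroring ThreadedDetector.run()."""
    def __init__(self):
        threading.Thread.__init__(self)
        self.commands = queue.Queue()
        self.starts = 0

    def run(self):
        while True:
            command = self.commands.get(True)  # block until a command arrives
            if command == "Start":
                self.starts += 1               # stand-in for detectors.start(...)
            elif command == "Terminate":
                break                          # end the thread


w = Worker()
w.start()
w.commands.put("Start")
w.commands.put("Start")
w.commands.put("Terminate")
w.join()
print(w.starts)  # 2
```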

+ 35 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python3/demo.py

@@ -0,0 +1,35 @@
+import snowboydecoder
+import sys
+import signal
+
+interrupted = False
+
+
+def signal_handler(signal, frame):
+    global interrupted
+    interrupted = True
+
+
+def interrupt_callback():
+    global interrupted
+    return interrupted
+
+if len(sys.argv) == 1:
+    print("Error: need to specify model name")
+    print("Usage: python demo.py your.model")
+    sys.exit(-1)
+
+model = sys.argv[1]
+
+# capture SIGINT signal, e.g., Ctrl+C
+signal.signal(signal.SIGINT, signal_handler)
+
+detector = snowboydecoder.HotwordDetector(model, sensitivity=0.5)
+print('Listening... Press Ctrl+C to exit')
+
+# main loop
+detector.start(detected_callback=snowboydecoder.play_audio_file,
+               interrupt_check=interrupt_callback,
+               sleep_time=0.03)
+
+detector.terminate()

+ 41 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python3/demo2.py

@@ -0,0 +1,41 @@
+import snowboydecoder
+import sys
+import signal
+
+# Demo code for listening to two hotwords at the same time
+
+interrupted = False
+
+
+def signal_handler(signal, frame):
+    global interrupted
+    interrupted = True
+
+
+def interrupt_callback():
+    global interrupted
+    return interrupted
+
+if len(sys.argv) != 3:
+    print("Error: need to specify 2 model names")
+    print("Usage: python demo2.py 1st.model 2nd.model")
+    sys.exit(-1)
+
+models = sys.argv[1:]
+
+# capture SIGINT signal, e.g., Ctrl+C
+signal.signal(signal.SIGINT, signal_handler)
+
+sensitivity = [0.5]*len(models)
+detector = snowboydecoder.HotwordDetector(models, sensitivity=sensitivity)
+callbacks = [lambda: snowboydecoder.play_audio_file(snowboydecoder.DETECT_DING),
+             lambda: snowboydecoder.play_audio_file(snowboydecoder.DETECT_DONG)]
+print('Listening... Press Ctrl+C to exit')
+
+# main loop
+# make sure you have the same numbers of callbacks and models
+detector.start(detected_callback=callbacks,
+               interrupt_check=interrupt_callback,
+               sleep_time=0.03)
+
+detector.terminate()

+ 40 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python3/demo3.py

@@ -0,0 +1,40 @@
+import snowboydecoder
+import sys
+import wave
+
+# Demo code for detecting hotword in a .wav file
+# Example Usage:
+#  $ python demo3.py resources/snowboy.wav resources/models/snowboy.umdl
+# Should print:
+#  Hotword Detected!
+#
+#  $ python demo3.py resources/ding.wav resources/models/snowboy.umdl
+# Should print:
+#  Hotword Not Detected!
+
+
+if len(sys.argv) != 3:
+    print("Error: need to specify wave file name and model name")
+    print("Usage: python demo3.py wave_file model_file")
+    sys.exit(-1)
+
+wave_file = sys.argv[1]
+model_file = sys.argv[2]
+
+f = wave.open(wave_file)
+assert f.getnchannels() == 1, "Error: Snowboy only supports 1 channel of audio (mono, not stereo)"
+assert f.getframerate() == 16000, "Error: Snowboy only supports 16K sampling rate"
+assert f.getsampwidth() == 2, "Error: Snowboy only supports 16bit per sample"
+data = f.readframes(f.getnframes())
+f.close()
+
+sensitivity = 0.5
+detection = snowboydecoder.HotwordDetector(model_file, sensitivity=sensitivity)
+
+ans = detection.detector.RunDetection(data)
+
+if ans == 1:
+    print('Hotword Detected!')
+else:
+    print('Hotword Not Detected!')
+

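The three `assert`s in demo3.py encode the audio format snowboy expects: mono, 16 kHz, 16-bit linear PCM. The same checks can be rehearsed against an in-memory wave file; a self-contained sketch with generated silence standing in for a real recording:

```python
import io
import wave

# Write 0.1 s of silence in the required format: 1 channel,
# 2 bytes (16 bits) per sample, 16000 Hz sampling rate.
buf = io.BytesIO()
w = wave.open(buf, "wb")
w.setnchannels(1)
w.setsampwidth(2)
w.setframerate(16000)
w.writeframes(b"\x00\x00" * 1600)
w.close()

# Re-open and apply the same validation demo3.py performs.
buf.seek(0)
f = wave.open(buf, "rb")
assert f.getnchannels() == 1, "snowboy only supports mono audio"
assert f.getframerate() == 16000, "snowboy only supports a 16K sampling rate"
assert f.getsampwidth() == 2, "snowboy only supports 16-bit samples"
n_frames = f.getnframes()
f.close()
print(n_frames)  # 1600
```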
+ 75 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python3/demo4.py

@@ -0,0 +1,75 @@
+import snowboydecoder
+import sys
+import signal
+import speech_recognition as sr
+import os
+
+"""
+This demo file shows you how to use the new_message_callback to interact with
+the recorded audio after a keyword is spoken. It uses the speech recognition
+library in order to convert the recorded audio into text.
+
+Information on installing the speech recognition library can be found at:
+https://pypi.python.org/pypi/SpeechRecognition/
+"""
+
+
+interrupted = False
+
+
+def audioRecorderCallback(fname):
+    print("converting audio to text")
+    r = sr.Recognizer()
+    with sr.AudioFile(fname) as source:
+        audio = r.record(source)  # read the entire audio file
+    # recognize speech using Google Speech Recognition
+    try:
+        # for testing purposes, we're just using the default API key
+        # to use another API key, use `r.recognize_google(audio, key="GOOGLE_SPEECH_RECOGNITION_API_KEY")`
+        # instead of `r.recognize_google(audio)`
+        print(r.recognize_google(audio))
+    except sr.UnknownValueError:
+        print("Google Speech Recognition could not understand audio")
+    except sr.RequestError as e:
+        print("Could not request results from Google Speech Recognition service; {0}".format(e))
+
+    os.remove(fname)
+
+
+
+def detectedCallback():
+    print('recording audio...', end='', flush=True)
+
+def signal_handler(signal, frame):
+    global interrupted
+    interrupted = True
+
+
+def interrupt_callback():
+    global interrupted
+    return interrupted
+
+if len(sys.argv) == 1:
+    print("Error: need to specify model name")
+    print("Usage: python demo4.py your.model")
+    sys.exit(-1)
+
+model = sys.argv[1]
+
+# capture SIGINT signal, e.g., Ctrl+C
+signal.signal(signal.SIGINT, signal_handler)
+
+detector = snowboydecoder.HotwordDetector(model, sensitivity=0.38)
+print('Listening... Press Ctrl+C to exit')
+
+# main loop
+detector.start(detected_callback=detectedCallback,
+               audio_recorder_callback=audioRecorderCallback,
+               interrupt_check=interrupt_callback,
+               sleep_time=0.01)
+
+detector.terminate()
+
+
+
+

+ 1 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python3/requirements.txt

@@ -0,0 +1 @@
+../Python/requirements.txt

+ 1 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python3/resources

@@ -0,0 +1 @@
+../../resources/

+ 253 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/Python3/snowboydecoder.py

@@ -0,0 +1,253 @@
+#!/usr/bin/env python
+
+import collections
+import pyaudio
+try:
+    from . import snowboydetect  # package-style import (installed via setup.py)
+except ImportError:
+    import snowboydetect  # script-style import (running the demos directly)
+import time
+import wave
+import os
+import logging
+
+logging.basicConfig()
+logger = logging.getLogger("snowboy")
+logger.setLevel(logging.INFO)
+TOP_DIR = os.path.dirname(os.path.abspath(__file__))
+
+RESOURCE_FILE = os.path.join(TOP_DIR, "resources/common.res")
+DETECT_DING = os.path.join(TOP_DIR, "resources/ding.wav")
+DETECT_DONG = os.path.join(TOP_DIR, "resources/dong.wav")
+
+
+class RingBuffer(object):
+    """Ring buffer to hold audio from PortAudio"""
+
+    def __init__(self, size=4096):
+        self._buf = collections.deque(maxlen=size)
+
+    def extend(self, data):
+        """Adds data to the end of buffer"""
+        self._buf.extend(data)
+
+    def get(self):
+        """Retrieves data from the beginning of buffer and clears it"""
+        tmp = bytes(bytearray(self._buf))
+        self._buf.clear()
+        return tmp
+
+
+def play_audio_file(fname=DETECT_DING):
+    """Simple callback function to play a wave file. By default it plays
+    a Ding sound.
+
+    :param str fname: wave file name
+    :return: None
+    """
+    ding_wav = wave.open(fname, 'rb')
+    ding_data = ding_wav.readframes(ding_wav.getnframes())
+    audio = pyaudio.PyAudio()
+    stream_out = audio.open(
+        format=audio.get_format_from_width(ding_wav.getsampwidth()),
+        channels=ding_wav.getnchannels(),
+        rate=ding_wav.getframerate(), input=False, output=True)
+    stream_out.start_stream()
+    stream_out.write(ding_data)
+    time.sleep(0.2)
+    stream_out.stop_stream()
+    stream_out.close()
+    audio.terminate()
+
+
+class HotwordDetector(object):
+    """
+    Snowboy decoder to detect whether a keyword specified by `decoder_model`
+    exists in a microphone input stream.
+
+    :param decoder_model: decoder model file path, a string or a list of strings
+    :param resource: resource file path.
+    :param sensitivity: decoder sensitivity, a float or a list of floats.
+                              The bigger the value, the more sensitive the
+                              decoder. If an empty list is provided, then the
+                              default sensitivity in the model will be used.
+    :param audio_gain: multiply input volume by this factor.
+    """
+
+    def __init__(self, decoder_model,
+                 resource=RESOURCE_FILE,
+                 sensitivity=[],
+                 audio_gain=1):
+
+        tm = type(decoder_model)
+        ts = type(sensitivity)
+        if tm is not list:
+            decoder_model = [decoder_model]
+        if ts is not list:
+            sensitivity = [sensitivity]
+        model_str = ",".join(decoder_model)
+
+        self.detector = snowboydetect.SnowboyDetect(
+            resource_filename=resource.encode(), model_str=model_str.encode())
+        self.detector.SetAudioGain(audio_gain)
+        self.num_hotwords = self.detector.NumHotwords()
+
+        if len(decoder_model) > 1 and len(sensitivity) == 1:
+            sensitivity = sensitivity * self.num_hotwords
+        if len(sensitivity) != 0:
+            assert self.num_hotwords == len(sensitivity), \
+                "number of hotwords in decoder_model (%d) and sensitivity " \
+                "(%d) does not match" % (self.num_hotwords, len(sensitivity))
+        sensitivity_str = ",".join([str(t) for t in sensitivity])
+        if len(sensitivity) != 0:
+            self.detector.SetSensitivity(sensitivity_str.encode())
+
+        self.ring_buffer = RingBuffer(
+            self.detector.NumChannels() * self.detector.SampleRate() * 5)
+
+    def start(self, detected_callback=play_audio_file,
+              interrupt_check=lambda: False,
+              sleep_time=0.03,
+              audio_recorder_callback=None,
+              silent_count_threshold=15,
+              recording_timeout=100):
+        """
+        Start the voice detector. For every `sleep_time` second it checks the
+        audio buffer for triggering keywords. If detected, then call
+        corresponding function in `detected_callback`, which can be a single
+        function (single model) or a list of callback functions (multiple
+        models). Every loop it also calls `interrupt_check` -- if it returns
+        True, it breaks from the loop and returns.
+
+        :param detected_callback: a function or list of functions. The number of
+                                  items must match the number of models in
+                                  `decoder_model`.
+        :param interrupt_check: a function that returns True if the main loop
+                                needs to stop.
+        :param float sleep_time: how much time in seconds every loop waits.
+        :param audio_recorder_callback: if specified, this will be called after
+                                        a keyword has been spoken and after the
+                                        phrase immediately after the keyword has
+                                        been recorded. The function will be
+                                        passed the name of the file where the
+                                        phrase was recorded.
+        :param silent_count_threshold: indicates how long silence must be heard
+                                       to mark the end of a phrase that is
+                                       being recorded.
+        :param recording_timeout: limits the maximum length of a recording.
+        :return: None
+        """
+        self._running = True
+
+        def audio_callback(in_data, frame_count, time_info, status):
+            self.ring_buffer.extend(in_data)
+            play_data = chr(0) * len(in_data)
+            return play_data, pyaudio.paContinue
+
+        self.audio = pyaudio.PyAudio()
+        self.stream_in = self.audio.open(
+            input=True, output=False,
+            format=self.audio.get_format_from_width(
+                self.detector.BitsPerSample() // 8),
+            channels=self.detector.NumChannels(),
+            rate=self.detector.SampleRate(),
+            frames_per_buffer=2048,
+            stream_callback=audio_callback)
+
+        if interrupt_check():
+            logger.debug("detect voice return")
+            return
+
+        tc = type(detected_callback)
+        if tc is not list:
+            detected_callback = [detected_callback]
+        if len(detected_callback) == 1 and self.num_hotwords > 1:
+            detected_callback *= self.num_hotwords
+
+        assert self.num_hotwords == len(detected_callback), \
+            "Error: hotwords in your models (%d) do not match the number of " \
+            "callbacks (%d)" % (self.num_hotwords, len(detected_callback))
+
+        logger.debug("detecting...")
+
+        state = "PASSIVE"
+        while self._running is True:
+            if interrupt_check():
+                logger.debug("detect voice break")
+                break
+            data = self.ring_buffer.get()
+            if len(data) == 0:
+                time.sleep(sleep_time)
+                continue
+
+            status = self.detector.RunDetection(data)
+            if status == -1:
+                logger.warning("Error initializing streams or reading audio data")
+
+            #small state machine to handle recording of phrase after keyword
+            if state == "PASSIVE":
+                if status > 0: #key word found
+                    self.recordedData = []
+                    self.recordedData.append(data)
+                    silentCount = 0
+                    recordingCount = 0
+                    message = "Keyword " + str(status) + " detected at time: "
+                    message += time.strftime("%Y-%m-%d %H:%M:%S",
+                                         time.localtime(time.time()))
+                    logger.info(message)
+                    callback = detected_callback[status-1]
+                    if callback is not None:
+                        callback()
+
+                    if audio_recorder_callback is not None:
+                        state = "ACTIVE"
+                    continue
+
+            elif state == "ACTIVE":
+                stopRecording = False
+                if recordingCount > recording_timeout:
+                    stopRecording = True
+                elif status == -2: #silence found
+                    if silentCount > silent_count_threshold:
+                        stopRecording = True
+                    else:
+                        silentCount = silentCount + 1
+                elif status == 0: #voice found
+                    silentCount = 0
+
+                if stopRecording:
+                    fname = self.saveMessage()
+                    audio_recorder_callback(fname)
+                    state = "PASSIVE"
+                    continue
+
+                recordingCount = recordingCount + 1
+                self.recordedData.append(data)
+
+        logger.debug("finished.")
+
+    def saveMessage(self):
+        """
+        Save the message stored in self.recordedData to a timestamped file.
+        """
+        filename = 'output' + str(int(time.time())) + '.wav'
+        data = b''.join(self.recordedData)
+
+        #use wave to save data
+        wf = wave.open(filename, 'wb')
+        wf.setnchannels(1)
+        wf.setsampwidth(self.audio.get_sample_size(
+            self.audio.get_format_from_width(
+                self.detector.BitsPerSample() // 8)))
+        wf.setframerate(self.detector.SampleRate())
+        wf.writeframes(data)
+        wf.close()
+        logger.debug("finished saving: " + filename)
+        return filename
+
+    def terminate(self):
+        """
+        Terminate audio stream. Users can call start() again to detect.
+        :return: None
+        """
+        self.stream_in.stop_stream()
+        self.stream_in.close()
+        self.audio.terminate()
+        self._running = False

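The `RingBuffer` in snowboydecoder.py is a thin wrapper over `collections.deque(maxlen=...)`: once full, the oldest audio silently falls off the front, and `get()` drains whatever has accumulated. A standalone copy of the class for illustration, with short byte strings standing in for audio chunks:

```python
import collections

class RingBuffer(object):
    """Ring buffer that keeps only the most recent `size` bytes."""

    def __init__(self, size=4096):
        self._buf = collections.deque(maxlen=size)

    def extend(self, data):
        """Adds data to the end of buffer, evicting the oldest bytes if full."""
        self._buf.extend(data)

    def get(self):
        """Retrieves data from the beginning of buffer and clears it."""
        tmp = bytes(bytearray(self._buf))
        self._buf.clear()
        return tmp

buf = RingBuffer(size=4)
buf.extend(b"abcdef")  # only the last 4 bytes survive
print(buf.get())       # b'cdef'
print(buf.get())       # b'' -- get() drained the buffer
```

This is why a slow consumer never blocks the PortAudio callback: the callback always succeeds at `extend()`, at the cost of dropping the oldest samples.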
+ 52 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/REST_API/training_service.py

@@ -0,0 +1,52 @@
+#! /usr/bin/env python
+
+import sys
+import base64
+import requests
+
+
+def get_wave(fname):
+    with open(fname, 'rb') as infile:
+        return base64.b64encode(infile.read()).decode('ascii')
+
+
+endpoint = "https://snowboy.kitt.ai/api/v1/train/"
+
+############# MODIFY THE FOLLOWING #############
+token = ""
+hotword_name = "???"
+language = "en"
+age_group = "20_29"
+gender = "M"
+microphone = "??" # e.g., macbook pro microphone
+############### END OF MODIFY ##################
+
+if __name__ == "__main__":
+    try:
+        [_, wav1, wav2, wav3, out] = sys.argv
+    except ValueError:
+        print("Usage: %s wave_file1 wave_file2 wave_file3 out_model_name" % sys.argv[0])
+        sys.exit()
+
+    data = {
+        "name": hotword_name,
+        "language": language,
+        "age_group": age_group,
+        "gender": gender,
+        "microphone": microphone,
+        "token": token,
+        "voice_samples": [
+            {"wave": get_wave(wav1)},
+            {"wave": get_wave(wav2)},
+            {"wave": get_wave(wav3)}
+        ]
+    }
+
+    response = requests.post(endpoint, json=data)
+    if response.ok:
+        with open(out, "wb") as outfile:
+            outfile.write(response.content)
+        print("Saved model to '%s'." % out)
+    else:
+        print("Request failed.")
+        print(response.text)

+ 39 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/examples/REST_API/training_service.sh

@@ -0,0 +1,39 @@
+#! /usr/bin/env bash
+ENDPOINT="https://snowboy.kitt.ai/api/v1/train/"
+
+############# MODIFY THE FOLLOWING #############
+TOKEN="??"
+NAME="??"
+LANGUAGE="en"
+AGE_GROUP="20_29"
+GENDER="M"
+MICROPHONE="??" # e.g., PS3 Eye
+############### END OF MODIFY ##################
+
+if [[ "$#" != 4 ]]; then
+    printf "Usage: %s wave_file1 wave_file2 wave_file3 out_model_name\n" "$0"
+    exit
+fi
+
+WAV1=`base64 $1`
+WAV2=`base64 $2`
+WAV3=`base64 $3`
+OUTFILE="$4"
+
+cat <<EOF >data.json
+{
+    "name": "$NAME",
+    "language": "$LANGUAGE",
+    "age_group": "$AGE_GROUP",
+    "token": "$TOKEN",
+    "gender": "$GENDER",
+    "microphone": "$MICROPHONE",
+    "voice_samples": [
+        {"wave": "$WAV1"},
+        {"wave": "$WAV2"},
+        {"wave": "$WAV3"}
+    ]
+}
+EOF
+
+curl -H "Content-Type: application/json" -X POST -d @data.json $ENDPOINT > $OUTFILE

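Both training clients above POST the same JSON shape to the endpoint. A sketch of just the payload construction, with no network call; the sample bytes and all field values are placeholders:

```python
import base64
import json

def encode_wave(raw_bytes):
    # The API expects each voice sample base64-encoded.
    return base64.b64encode(raw_bytes).decode("ascii")

# Stand-ins for the contents of three recorded wave files.
samples = [b"\x00\x01", b"\x02\x03", b"\x04\x05"]

payload = {
    "name": "my_hotword",  # placeholder, see the MODIFY sections above
    "language": "en",
    "age_group": "20_29",
    "gender": "M",
    "microphone": "macbook pro microphone",
    "token": "<your-api-token>",
    "voice_samples": [{"wave": encode_wave(s)} for s in samples],
}

body = json.dumps(payload)
print(json.loads(body)["voice_samples"][0]["wave"])  # AAE=
```

Base64-encoding the raw wave bytes is what keeps the request valid JSON; the server decodes each `wave` field back to audio.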
+ 220 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/include/snowboy-detect.h

@@ -0,0 +1,220 @@
+// include/snowboy-detect.h
+
+// Copyright 2016  KITT.AI (author: Guoguo Chen)
+
+#ifndef SNOWBOY_INCLUDE_SNOWBOY_DETECT_H_
+#define SNOWBOY_INCLUDE_SNOWBOY_DETECT_H_
+
+#include <memory>
+#include <string>
+
+namespace snowboy {
+
+// Forward declaration.
+struct WaveHeader;
+class PipelineDetect;
+class PipelineVad;
+
+////////////////////////////////////////////////////////////////////////////////
+//
+// SnowboyDetect class interface.
+//
+////////////////////////////////////////////////////////////////////////////////
+class SnowboyDetect {
+ public:
+  // Constructor that takes a resource file, and a list of hotword models which
+  // are separated by comma. In the case that more than one hotword exist in the
+  // provided models, RunDetection() will return the index of the hotword, if
+  // the corresponding hotword is triggered.
+  //
+  // CAVEAT: a personal model only contains one hotword, but a universal model
+  //         may contain multiple hotwords. It is your responsibility to figure
+  //         out the index of the hotword. For example, if your model string is
+  //         "foo.pmdl,bar.umdl", where foo.pmdl contains hotword x, bar.umdl
+  //         has two hotwords y and z, the indices of different hotwords are as
+  //         follows:
+  //         x 1
+  //         y 2
+  //         z 3
+  //
+  // @param [in]  resource_filename   Filename of resource file.
+  // @param [in]  model_str           A string of multiple hotword models,
+  //                                  separated by comma.
+  SnowboyDetect(const std::string& resource_filename,
+                const std::string& model_str);
+
+  // Resets the detection. This class handles voice activity detection (VAD)
+  // internally. But if you have an external VAD, you should call Reset()
+  // whenever you see segment end from your VAD.
+  bool Reset();
+
+  // Runs hotword detection. Supported audio format is WAVE (with linear PCM,
+  // 8-bits unsigned integer, 16-bits signed integer or 32-bits signed integer).
+  // See SampleRate(), NumChannels() and BitsPerSample() for the required
+  // sampling rate, number of channels and bits per sample values. You are
+  // supposed to provide a small chunk of data (e.g., 0.1 second) each time you
+  // call RunDetection(). Larger chunk usually leads to longer delay, but less
+  // CPU usage.
+  //
+  // Definition of return values:
+  // -2: Silence.
+  // -1: Error.
+  //  0: No event.
+  //  1: Hotword 1 triggered.
+  //  2: Hotword 2 triggered.
+  //  ...
+  //
+  // @param [in]  data               Small chunk of data to be detected. See
+  //                                 above for the supported data format.
+  // @param [in]  is_end             Set it to true if it is the end of a
+  //                                 utterance or file.
+  int RunDetection(const std::string& data, bool is_end = false);
+
+  // Various versions of RunDetection() that take different formats of audio. If
+  // NumChannels() > 1, e.g., NumChannels() == 2, then the array is as follows:
+  //
+  //   d1c1, d1c2, d2c1, d2c2, d3c1, d3c2, ..., dNc1, dNc2
+  //
+  // where d1c1 means data point 1 of channel 1.
+  //
+  // @param [in]  data               Small chunk of data to be detected. See
+  //                                 above for the supported data format.
+  // @param [in]  array_length       Length of the data array.
+  // @param [in]  is_end             Set it to true if it is the end of a
+  //                                 utterance or file.
+  int RunDetection(const float* const data,
+                   const int array_length, bool is_end = false);
+  int RunDetection(const int16_t* const data,
+                   const int array_length, bool is_end = false);
+  int RunDetection(const int32_t* const data,
+                   const int array_length, bool is_end = false);
+
+  // Sets the sensitivity string for the loaded hotwords. A <sensitivity_str> is
+  // a list of floating numbers between 0 and 1, and separated by comma. For
+  // example, if there are 3 loaded hotwords, your string should looks something
+  // like this:
+  //   0.4,0.5,0.8
+  // Make sure you properly align the sensitivity value to the corresponding
+  // hotword.
+  void SetSensitivity(const std::string& sensitivity_str);
+
+  // Similar to the sensitivity setting above. When set higher than the above
+  // sensitivity, the algorithm automatically chooses between the normal
+  // sensitivity set above and the higher sensitivity set here, to maximize the
+  // performance. By default, it is not set, which means the algorithm will
+  // stick with the sensitivity set above.
+  void SetHighSensitivity(const std::string& high_sensitivity_str);
+
+  // Returns the sensitivity string for the current hotwords.
+  std::string GetSensitivity() const;
+
+  // Applies a fixed gain to the input audio. In case you have a very weak
+  // microphone, you can use this function to boost input audio level.
+  void SetAudioGain(const float audio_gain);
+
+  // Writes the models to the model filenames specified in <model_str> in the
+  // constructor. This overwrites the original model with the latest parameter
+  // setting. You are supposed to call this function if you have updated the
+  // hotword sensitivities through SetSensitivity(), and you would like to store
+  // those values in the model as the default value.
+  void UpdateModel() const;
+
+  // Returns the number of loaded hotwords. This helps you figure out the
+  // index of the hotwords.
+  int NumHotwords() const;
+
+  // If <apply_frontend> is true, then apply frontend audio processing;
+  // otherwise turns the audio processing off.
+  void ApplyFrontend(const bool apply_frontend);
+
+  // Returns the required sampling rate, number of channels and bits per sample
+  // values for the audio data. You should use this information to set up your
+  // audio capturing interface.
+  int SampleRate() const;
+  int NumChannels() const;
+  int BitsPerSample() const;
+
+  ~SnowboyDetect();
+
+ private:
+  std::unique_ptr<WaveHeader> wave_header_;
+  std::unique_ptr<PipelineDetect> detect_pipeline_;
+};
+
+////////////////////////////////////////////////////////////////////////////////
+//
+// SnowboyVad class interface.
+//
+////////////////////////////////////////////////////////////////////////////////
+class SnowboyVad {
+ public:
+  // Constructor that takes a resource file. It shares the same resource file
+  // with SnowboyDetect.
+  SnowboyVad(const std::string& resource_filename);
+
+  // Resets the VAD.
+  bool Reset();
+
+  // Runs the VAD algorithm. Supported audio format is WAVE (with linear PCM,
+  // 8-bits unsigned integer, 16-bits signed integer or 32-bits signed integer).
+  // See SampleRate(), NumChannels() and BitsPerSample() for the required
+  // sampling rate, number of channels and bits per sample values. You are
+  // supposed to provide a small chunk of data (e.g., 0.1 second) each time you
+  // call RunDetection(). Larger chunk usually leads to longer delay, but less
+  // CPU usage.
+  //
+  // Definition of return values:
+  // -2: Silence.
+  // -1: Error.
+  //  0: Non-silence.
+  //
+  // @param [in]  data               Small chunk of data to be detected. See
+  //                                 above for the supported data format.
+  // @param [in]  is_end             Set it to true if it is the end of a
+  //                                 utterance or file.
+  int RunVad(const std::string& data, bool is_end = false);
+
+  // Various versions of RunVad() that take different formats of audio. If
+  // NumChannels() > 1, e.g., NumChannels() == 2, then the array is as follows:
+  //
+  //   d1c1, d1c2, d2c1, d2c2, d3c1, d3c2, ..., dNc1, dNc2
+  //
+  // where d1c1 means data point 1 of channel 1.
+  //
+  // @param [in]  data               Small chunk of data to be detected. See
+  //                                 above for the supported data format.
+  // @param [in]  array_length       Length of the data array.
+  // @param [in]  is_end             Set it to true if it is the end of a
+  //                                 utterance or file.
+  int RunVad(const float* const data,
+             const int array_length, bool is_end = false);
+  int RunVad(const int16_t* const data,
+             const int array_length, bool is_end = false);
+  int RunVad(const int32_t* const data,
+             const int array_length, bool is_end = false);
+
+  // Applies a fixed gain to the input audio. In case you have a very weak
+  // microphone, you can use this function to boost input audio level.
+  void SetAudioGain(const float audio_gain);
+
+  // If <apply_frontend> is true, then apply frontend audio processing;
+  // otherwise turns the audio processing off.
+  void ApplyFrontend(const bool apply_frontend);
+
+  // Returns the required sampling rate, number of channels and bits per sample
+  // values for the audio data. You should use this information to set up your
+  // audio capturing interface.
+  int SampleRate() const;
+  int NumChannels() const;
+  int BitsPerSample() const;
+
+  ~SnowboyVad();
+
+ private:
+  std::unique_ptr<WaveHeader> wave_header_;
+  std::unique_ptr<PipelineVad> vad_pipeline_;
+};
+
+}  // namespace snowboy
+
+#endif  // SNOWBOY_INCLUDE_SNOWBOY_DETECT_H_

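The return-code contract documented above is worth pinning down, particularly the 1-based hotword indices that span every model listed in `model_str`. A sketch of the dispatch logic in Python (the language the bundled demos use); the detector itself is stubbed out since no model is loaded here:

```python
# Status codes per snowboy-detect.h:
#   -2 silence, -1 error, 0 no event, >= 1 index of the triggered hotword.
def describe(status, hotwords):
    if status == -2:
        return "silence"
    if status == -1:
        return "error"
    if status == 0:
        return "no event"
    if 1 <= status <= len(hotwords):
        # Indices are 1-based across all models listed in model_str.
        return "hotword: " + hotwords[status - 1]
    return "unknown status"

# E.g. model_str = "foo.pmdl,bar.umdl", where foo.pmdl holds hotword x
# and bar.umdl holds hotwords y and z -- their indices are 1, 2, 3.
hotwords = ["x", "y", "z"]
for status in (-2, -1, 0, 1, 3):
    print(describe(status, hotwords))
```

This mirrors why snowboydecoder.py indexes `detected_callback[status - 1]`: a positive status identifies which hotword fired, not merely that one fired.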
Binary
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/lib/libsnowboy-detect.a


+ 43 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/package.json

@@ -0,0 +1,43 @@
+{
+  "name": "snowboy",
+  "version": "1.3.0",
+  "description": "Snowboy is a customizable hotword detection engine",
+  "main": "lib/node/index.js",
+  "typings": "lib/node/index.d.ts",
+  "binary": {
+    "module_name": "snowboy",
+    "module_path": "./lib/node/binding/{configuration}/{node_abi}-{platform}-{arch}/",
+    "remote_path": "./{module_name}/v{version}/{configuration}/",
+    "package_name": "{module_name}-v{version}-{node_abi}-{platform}-{arch}.tar.gz",
+    "host": "https://snowboy-release-node.s3-us-west-2.amazonaws.com"
+  },
+  "scripts": {
+    "preinstall": "npm install node-pre-gyp",
+    "install": "node-pre-gyp install --fallback-to-build",
+    "test": "node index.js",
+    "prepublish": "tsc --listFiles"
+  },
+  "author": "KITT.AI <snowboy@kitt.ai>",
+  "contributors": [
+    "Leandre Gohy <leandre.gohy@hexeo.be>",
+    "Evan Cohen <evanbtcohen@gmail.com>"
+  ],
+  "repository": {
+    "type": "git",
+    "url": "git+https://github.com/Kitt-AI/snowboy.git"
+  },
+  "gypfile": true,
+  "license": "Apache-2.0",
+  "dependencies": {
+    "node-pre-gyp": "^0.6.30"
+  },
+  "devDependencies": {
+    "@types/node": "^6.0.38",
+    "aws-sdk": "2.x",
+    "nan": "^2.4.0",
+    "typescript": "^2.0.2"
+  },
+  "bugs": {
+    "url": "https://github.com/Kitt-AI/snowboy/issues"
+  }
+}

+ 0 - 0
resources/common.res → catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/resources/common.res


Binary
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/resources/models/jarvis.umdl


Binary
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/resources/models/smart_mirror.umdl


+ 0 - 0
resources/models/snowboy.umdl → catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/resources/models/snowboy.umdl


Binary
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/resources/snowboy.raw


+ 23 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/scripts/publish-node.sh

@@ -0,0 +1,23 @@
+#!/bin/bash
+
+
+NODE_VERSIONS=( "4.0.0" "5.0.0" "6.0.0" "7.0.0" "8.0.0" "9.0.0")
+
+# Makes sure nvm is available. `which nvm` cannot work here because nvm is a
+# shell function, so source it (cloning a fresh copy first if it is missing).
+if [ ! -s ~/.nvm/nvm.sh ]; then
+  rm -rf ~/.nvm/ &&\
+    git clone --depth 1 https://github.com/creationix/nvm.git ~/.nvm
+fi
+source ~/.nvm/nvm.sh
+
+for i in "${NODE_VERSIONS[@]}"; do
+   # Installs and use the correct version of node
+   nvm install $i
+   nvm use $i
+
+   # build, package and publish for the current package version
+   npm install nan
+   npm install aws-sdk
+   npm install node-pre-gyp
+   ./node_modules/.bin/node-pre-gyp clean configure build package publish
+done

+ 61 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/setup.py

@@ -0,0 +1,61 @@
+import os
+import sys
+from setuptools import setup, find_packages
+from distutils.command.build import build
+from distutils.dir_util import copy_tree
+from subprocess import call
+
+
+py_dir = 'Python' if sys.version_info[0] < 3 else 'Python3'
+
+class SnowboyBuild(build):
+
+    def run(self):
+
+        cmd = ['make']
+        swig_dir = os.path.join('swig', py_dir)
+        def compile():
+            call(cmd, cwd=swig_dir)
+
+        self.execute(compile, [], 'Compiling snowboy...')
+
+        # copy generated .so to build folder
+        self.mkpath(self.build_lib)
+        snowboy_build_lib = os.path.join(self.build_lib, 'snowboy')
+        self.mkpath(snowboy_build_lib)
+        target_file = os.path.join(swig_dir, '_snowboydetect.so')
+        if not self.dry_run:
+            self.copy_file(target_file,
+                           snowboy_build_lib)
+
+            # copy resources too since it is a symlink
+            resources_dir = 'resources'
+            resources_dir_on_build = os.path.join(snowboy_build_lib,
+                                                  'resources')
+            copy_tree(resources_dir, resources_dir_on_build)
+
+        build.run(self)
+
+
+setup(
+    name='snowboy',
+    version='1.3.0',
+    description='Snowboy is a customizable hotword detection engine',
+    maintainer='KITT.AI',
+    maintainer_email='snowboy@kitt.ai',
+    license='Apache-2.0',
+    url='https://snowboy.kitt.ai',
+    packages=find_packages(os.path.join('examples', py_dir)),
+    package_dir={'snowboy': os.path.join('examples', py_dir)},
+    py_modules=['snowboy.snowboydecoder', 'snowboy.snowboydetect'],
+    package_data={'snowboy': ['resources/*']},
+    zip_safe=False,
+    long_description="",
+    classifiers=[],
+    install_requires=[
+        'PyAudio',
+    ],
+    cmdclass={
+        'build': SnowboyBuild
+    }
+)

+ 24 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/swig/Python/snowboy-detect-swig.i

@@ -0,0 +1,24 @@
+// swig/Python/snowboy-detect-swig.i
+
+// Copyright 2016  KITT.AI (author: Guoguo Chen)
+
+%module snowboydetect
+
+// Suppress SWIG warnings.
+#pragma SWIG nowarn=SWIGWARN_PARSE_NESTED_CLASS
+%include "std_string.i"
+
+%{
+#include "include/snowboy-detect.h"
+%}
+
+%include "include/snowboy-detect.h"
+
+// Below is Python 3 support. However, adding
+// it generates a broken .so file for Fedora 25
+// on ARMv7, so be sure to comment it out when
+// you compile for that platform.
+%begin %{
+#define SWIG_PYTHON_STRICT_BYTE_CHAR
+%}

+ 24 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/swig/Python3/snowboy-detect-swig.i

@@ -0,0 +1,24 @@
+// swig/Python3/snowboy-detect-swig.i
+
+// Copyright 2016  KITT.AI (author: Guoguo Chen)
+
+%module snowboydetect
+
+// Suppress SWIG warnings.
+#pragma SWIG nowarn=SWIGWARN_PARSE_NESTED_CLASS
+%include "std_string.i"
+
+%{
+#include "include/snowboy-detect.h"
+%}
+
+%include "include/snowboy-detect.h"
+
+// Below is Python 3 support. However, adding
+// it generates a broken .so file for Fedora 25
+// on ARMv7, so be sure to comment it out when
+// you compile for that platform.
+%begin %{
+#define SWIG_PYTHON_STRICT_BYTE_CHAR
+%}

+ 34 - 0
catkin_ws/src/snowboy_wakeup/3rdparty/snowboy/tsconfig.json

@@ -0,0 +1,34 @@
+{
+    "compilerOptions": {
+        "target": "es6",
+        "module": "commonjs",
+        "moduleResolution": "node",
+        "isolatedModules": false,
+        "jsx": "react",
+        "experimentalDecorators": false,
+        "emitDecoratorMetadata": false,
+        "declaration": true,
+        "noImplicitAny": true,
+        "noImplicitUseStrict": false,
+        "noFallthroughCasesInSwitch": true,
+        "noImplicitReturns": true,
+        "removeComments": true,
+        "noLib": false,
+        "preserveConstEnums": true,
+        "suppressImplicitAnyIndexErrors": true
+    },
+    "files": [
+      "lib/node/index.ts",
+      "lib/node/node-pre-gyp.d.ts",
+      "lib/node/SnowboyDetectNative.d.ts",
+      "node_modules/@types/node/index.d.ts"
+    ],
+    "exclude": [
+      "node_modules"
+    ],
+    "compileOnSave": true,
+    "buildOnSave": false,
+    "atom": {
+        "rewriteTsconfig": false
+    }
+}

+ 100 - 0
catkin_ws/src/snowboy_wakeup/CMakeLists.txt

@@ -0,0 +1,100 @@
+cmake_minimum_required(VERSION 2.8.3)
+project(snowboy_wakeup)
+
+find_package(catkin REQUIRED COMPONENTS
+    roscpp
+    audio_common_msgs
+    dynamic_reconfigure
+)
+
+set(CMAKE_CXX_FLAGS "-std=c++0x ${CMAKE_CXX_FLAGS}")
+set(CMAKE_MODULE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/cmake_modules)
+
+find_package(BLAS)
+
+# ------------------------------------------------------------------------------------------------
+#                                     ROS MESSAGES AND SERVICES
+# ------------------------------------------------------------------------------------------------
+
+# Generate services
+# add_service_files(
+#    FILES
+#    service1.srv
+#    ...
+# )
+
+# Generate added messages and services with any dependencies listed here
+# generate_messages(
+#    DEPENDENCIES
+#    geometry_msgs
+#    ...
+# )
+
+#add dynamic reconfigure api
+#find_package(catkin REQUIRED dynamic_reconfigure)
+generate_dynamic_reconfigure_options(
+    cfg/SnowboyReconfigure.cfg
+)
+
+# ------------------------------------------------------------------------------------------------
+#                                          CATKIN EXPORT
+# ------------------------------------------------------------------------------------------------
+catkin_package(
+#  INCLUDE_DIRS include
+#  LIBRARIES hotword_detector
+#  CATKIN_DEPENDS roscpp audio_common_msgs
+#  DEPENDS system_lib
+)
+
+# ------------------------------------------------------------------------------------------------
+#                                              BUILD
+# ------------------------------------------------------------------------------------------------
+include_directories(
+    include
+    3rdparty
+    ${catkin_INCLUDE_DIRS}
+)
+
+file(GLOB_RECURSE HEADER_FILES include/*.h)
+file(GLOB_RECURSE 3RD_PARTY_FILES 3rdparty/*.h)
+
+add_library(hotword_detector
+    src/hotword_detector.cpp
+    ${HEADER_FILES}
+    ${3RD_PARTY_FILES}
+)
+
+target_link_libraries(hotword_detector
+    ${CMAKE_CURRENT_SOURCE_DIR}/3rdparty/snowboy/lib/libsnowboy-detect.a
+    ${catkin_LIBRARIES}
+    ${BLAS_LIBRARIES}
+)
+
+add_executable(hotword_detector_node
+    src/hotword_detector_node.cpp
+)
+target_link_libraries(hotword_detector_node
+    hotword_detector
+    ${catkin_LIBRARIES}
+)
+add_dependencies(hotword_detector_node ${PROJECT_NAME}_gencfg)
+
+install(
+  TARGETS
+  hotword_detector
+  hotword_detector_node
+  ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
+  LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
+  RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
+)
+
+install(
+  DIRECTORY launch/
+  DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}/launch
+)
+
+install(
+  DIRECTORY resources/
+  DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}/resources
+)
+

+ 11 - 0
catkin_ws/src/snowboy_wakeup/cfg/SnowboyReconfigure.cfg

@@ -0,0 +1,11 @@
+#!/usr/bin/env python
+PACKAGE = "snowboy_wakeup"
+
+from dynamic_reconfigure.parameter_generator_catkin import *
+
+gen = ParameterGenerator()
+
+gen.add("sensitivity", double_t, 0, "Snowboy sensitivity", .7, 0, 1)
+gen.add("audio_gain", double_t, 0, "Snowboy audio gain", 1, 0.01, 10)
+
+exit(gen.generate(PACKAGE, PACKAGE, "SnowboyReconfigure"))

+ 419 - 0
catkin_ws/src/snowboy_wakeup/cmake_modules/FindBLAS.cmake

@@ -0,0 +1,419 @@
+# Find BLAS library
+#
+# This module finds an installed library that implements the BLAS
+# linear-algebra interface (see http://www.netlib.org/blas/).
+# The list of libraries searched for is mainly taken
+# from the autoconf macro file, acx_blas.m4 (distributed at
+# http://ac-archive.sourceforge.net/ac-archive/acx_blas.html).
+#
+# This module sets the following variables:
+#  BLAS_FOUND - set to true if a library implementing the BLAS interface
+#    is found
+#  BLAS_INCLUDE_DIR - Directories containing the BLAS header files
+#  BLAS_DEFINITIONS - Compilation options to use BLAS
+#  BLAS_LINKER_FLAGS - Linker flags to use BLAS (excluding -l
+#    and -L).
+#  BLAS_LIBRARIES_DIR - Directories containing the BLAS libraries.
+#     May be null if BLAS_LIBRARIES contains libraries name using full path.
+#  BLAS_LIBRARIES - List of libraries to link against BLAS interface.
+#     May be null if the compiler supports auto-link (e.g. VC++).
+#  BLAS_USE_FILE - The name of the cmake module to include to compile
+#     applications or libraries using BLAS.
+#
+# This module was modified by CGAL team:
+# - find libraries for a C++ compiler, instead of Fortran
+# - added BLAS_INCLUDE_DIR, BLAS_DEFINITIONS and BLAS_LIBRARIES_DIR
+# - removed BLAS95_LIBRARIES
+
+include(CheckFunctionExists)
+
+
+# This macro checks for the existence of the combination of fortran libraries
+# given by _list.  If the combination is found, this macro checks (using the
+# check_function_exists macro) whether can link against that library
+# combination using the name of a routine given by _name using the linker
+# flags given by _flags.  If the combination of libraries is found and passes
+# the link test, LIBRARIES is set to the list of complete library paths that
+# have been found and DEFINITIONS to the required definitions.
+# Otherwise, LIBRARIES is set to FALSE.
+# N.B. _prefix is the prefix applied to the names of all cached variables that
+# are generated internally and marked advanced by this macro.
+macro(check_fortran_libraries DEFINITIONS LIBRARIES _prefix _name _flags _list _path)
+  #message("DEBUG: check_fortran_libraries(${_list} in ${_path})")
+
+  # Check for the existence of the libraries given by _list
+  set(_libraries_found TRUE)
+  set(_libraries_work FALSE)
+  set(${DEFINITIONS} "")
+  set(${LIBRARIES} "")
+  set(_combined_name)
+  foreach(_library ${_list})
+    set(_combined_name ${_combined_name}_${_library})
+
+    if(_libraries_found)
+      # search first in ${_path}
+      find_library(${_prefix}_${_library}_LIBRARY
+                  NAMES ${_library}
+                  PATHS ${_path} NO_DEFAULT_PATH
+                  )
+      # if not found, search in environment variables and system
+      if ( WIN32 )
+        find_library(${_prefix}_${_library}_LIBRARY
+                    NAMES ${_library}
+                    PATHS ENV LIB
+                    )
+      elseif ( APPLE )
+        find_library(${_prefix}_${_library}_LIBRARY
+                    NAMES ${_library}
+                    PATHS /usr/local/lib /usr/lib /usr/local/lib64 /usr/lib64 ENV DYLD_LIBRARY_PATH
+                    )
+      else ()
+        find_library(${_prefix}_${_library}_LIBRARY
+                    NAMES ${_library}
+                    PATHS /usr/local/lib /usr/lib /usr/local/lib64 /usr/lib64 ENV LD_LIBRARY_PATH
+                    )
+      endif()
+      mark_as_advanced(${_prefix}_${_library}_LIBRARY)
+      set(${LIBRARIES} ${${LIBRARIES}} ${${_prefix}_${_library}_LIBRARY})
+      set(_libraries_found ${${_prefix}_${_library}_LIBRARY})
+    endif(_libraries_found)
+  endforeach(_library ${_list})
+  if(_libraries_found)
+    set(_libraries_found ${${LIBRARIES}})
+  endif()
+
+  # Test this combination of libraries with the Fortran/f2c interface.
+  # We test the Fortran interface first as it is well standardized.
+  if(_libraries_found AND NOT _libraries_work)
+    set(${DEFINITIONS}  "-D${_prefix}_USE_F2C")
+    set(${LIBRARIES}    ${_libraries_found})
+    # Some C++ linkers require the f2c library to link with Fortran libraries.
+    # I do not know which ones, thus I just add the f2c library if it is available.
+    find_package( F2C QUIET )
+    if ( F2C_FOUND )
+      set(${DEFINITIONS}  ${${DEFINITIONS}} ${F2C_DEFINITIONS})
+      set(${LIBRARIES}    ${${LIBRARIES}} ${F2C_LIBRARIES})
+    endif()
+    set(CMAKE_REQUIRED_DEFINITIONS  ${${DEFINITIONS}})
+    set(CMAKE_REQUIRED_LIBRARIES    ${_flags} ${${LIBRARIES}})
+    #message("DEBUG: CMAKE_REQUIRED_DEFINITIONS = ${CMAKE_REQUIRED_DEFINITIONS}")
+    #message("DEBUG: CMAKE_REQUIRED_LIBRARIES = ${CMAKE_REQUIRED_LIBRARIES}")
+    # Check if function exists with f2c calling convention (ie a trailing underscore)
+    check_function_exists(${_name}_ ${_prefix}_${_name}_${_combined_name}_f2c_WORKS)
+    set(CMAKE_REQUIRED_DEFINITIONS  "")
+    set(CMAKE_REQUIRED_LIBRARIES    "")
+    mark_as_advanced(${_prefix}_${_name}_${_combined_name}_f2c_WORKS)
+    set(_libraries_work ${${_prefix}_${_name}_${_combined_name}_f2c_WORKS})
+  endif(_libraries_found AND NOT _libraries_work)
+
+  # If not found, test this combination of libraries with a C interface.
+  # A few implementations (ie ACML) provide a C interface. Unfortunately, there is no standard.
+  if(_libraries_found AND NOT _libraries_work)
+    set(${DEFINITIONS} "")
+    set(${LIBRARIES}   ${_libraries_found})
+    set(CMAKE_REQUIRED_DEFINITIONS "")
+    set(CMAKE_REQUIRED_LIBRARIES   ${_flags} ${${LIBRARIES}})
+    #message("DEBUG: CMAKE_REQUIRED_LIBRARIES = ${CMAKE_REQUIRED_LIBRARIES}")
+    check_function_exists(${_name} ${_prefix}_${_name}${_combined_name}_WORKS)
+    set(CMAKE_REQUIRED_LIBRARIES "")
+    mark_as_advanced(${_prefix}_${_name}${_combined_name}_WORKS)
+    set(_libraries_work ${${_prefix}_${_name}${_combined_name}_WORKS})
+  endif(_libraries_found AND NOT _libraries_work)
+
+  # on failure
+  if(NOT _libraries_work)
+    set(${DEFINITIONS} "")
+    set(${LIBRARIES}   FALSE)
+  endif()
+  #message("DEBUG: ${DEFINITIONS} = ${${DEFINITIONS}}")
+  #message("DEBUG: ${LIBRARIES} = ${${LIBRARIES}}")
+endmacro(check_fortran_libraries)
+
+
+#
+# main
+#
+
+# Is it already configured?
+if (BLAS_LIBRARIES_DIR OR BLAS_LIBRARIES)
+
+  set(BLAS_FOUND TRUE)
+
+else()
+
+  # reset variables
+  set( BLAS_INCLUDE_DIR "" )
+  set( BLAS_DEFINITIONS "" )
+  set( BLAS_LINKER_FLAGS "" )
+  set( BLAS_LIBRARIES "" )
+  set( BLAS_LIBRARIES_DIR "" )
+
+    #
+    # If Unix, search for BLAS function in possible libraries
+    #
+
+    # BLAS in ATLAS library? (http://math-atlas.sourceforge.net/)
+    if(NOT BLAS_LIBRARIES)
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      sgemm
+      ""
+      "cblas;f77blas;atlas"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+    endif()
+
+    # BLAS in PhiPACK libraries? (requires generic BLAS lib, too)
+    if(NOT BLAS_LIBRARIES)
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      sgemm
+      ""
+      "sgemm;dgemm;blas"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+    endif()
+
+    # BLAS in Alpha CXML library?
+    if(NOT BLAS_LIBRARIES)
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      sgemm
+      ""
+      "cxml"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+    endif()
+
+    # BLAS in Alpha DXML library? (now called CXML, see above)
+    if(NOT BLAS_LIBRARIES)
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      sgemm
+      ""
+      "dxml"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+    endif()
+
+    # BLAS in Sun Performance library?
+    if(NOT BLAS_LIBRARIES)
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      sgemm
+      "-xlic_lib=sunperf"
+      "sunperf;sunmath"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+      if(BLAS_LIBRARIES)
+        # Extra linker flag
+        set(BLAS_LINKER_FLAGS "-xlic_lib=sunperf")
+      endif()
+    endif()
+
+    # BLAS in SCSL library?  (SGI/Cray Scientific Library)
+    if(NOT BLAS_LIBRARIES)
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      sgemm
+      ""
+      "scsl"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+    endif()
+
+    # BLAS in SGIMATH library?
+    if(NOT BLAS_LIBRARIES)
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      sgemm
+      ""
+      "complib.sgimath"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+    endif()
+
+    # BLAS in IBM ESSL library? (requires generic BLAS lib, too)
+    if(NOT BLAS_LIBRARIES)
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      sgemm
+      ""
+      "essl;blas"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+    endif()
+
+    #BLAS in intel mkl 10 library? (em64t 64bit)
+    if(NOT BLAS_LIBRARIES)
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      sgemm
+      ""
+      "mkl_intel_lp64;mkl_intel_thread;mkl_core;guide;pthread"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+    endif()
+
+    ### windows version of intel mkl 10?
+    if(NOT BLAS_LIBRARIES)
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      SGEMM
+      ""
+      "mkl_c_dll;mkl_intel_thread_dll;mkl_core_dll;libguide40"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+    endif()
+
+    #older versions of intel mkl libs
+
+    # BLAS in intel mkl library? (shared)
+    if(NOT BLAS_LIBRARIES)
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      sgemm
+      ""
+      "mkl;guide;pthread"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+    endif()
+
+    #BLAS in intel mkl library? (static, 32bit)
+    if(NOT BLAS_LIBRARIES)
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      sgemm
+      ""
+      "mkl_ia32;guide;pthread"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+    endif()
+
+    #BLAS in intel mkl library? (static, em64t 64bit)
+    if(NOT BLAS_LIBRARIES)
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      sgemm
+      ""
+      "mkl_em64t;guide;pthread"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+    endif()
+
+    #BLAS in acml library?
+    if(NOT BLAS_LIBRARIES)
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      sgemm
+      ""
+      "acml"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+    endif()
+
+    # Apple BLAS library?
+    if(NOT BLAS_LIBRARIES)
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      sgemm
+      ""
+      "Accelerate"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+    endif()
+
+    if ( NOT BLAS_LIBRARIES )
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      sgemm
+      ""
+      "vecLib"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+    endif ( NOT BLAS_LIBRARIES )
+
+    # Generic BLAS library?
+    # This configuration *must* be the last try as this library is notably slow.
+    if ( NOT BLAS_LIBRARIES )
+      check_fortran_libraries(
+      BLAS_DEFINITIONS
+      BLAS_LIBRARIES
+      BLAS
+      sgemm
+      ""
+      "blas"
+      "${CGAL_TAUCS_LIBRARIES_DIR} ENV BLAS_LIB_DIR"
+      )
+    endif()
+
+  if(BLAS_LIBRARIES_DIR OR BLAS_LIBRARIES)
+    set(BLAS_FOUND TRUE)
+  else()
+    set(BLAS_FOUND FALSE)
+  endif()
+
+  if(NOT BLAS_FIND_QUIETLY)
+    if(BLAS_FOUND)
+      message(STATUS "A library with BLAS API found.")
+    else(BLAS_FOUND)
+      if(BLAS_FIND_REQUIRED)
+        message(FATAL_ERROR "A required library with BLAS API not found. Please specify library location.")
+      else()
+        message(STATUS "A library with BLAS API not found. Please specify library location.")
+      endif()
+    endif(BLAS_FOUND)
+  endif(NOT BLAS_FIND_QUIETLY)
+
+  # Add variables to cache
+  set( BLAS_INCLUDE_DIR   "${BLAS_INCLUDE_DIR}"
+                          CACHE PATH "Directories containing the BLAS header files" FORCE )
+  set( BLAS_DEFINITIONS   "${BLAS_DEFINITIONS}"
+                          CACHE STRING "Compilation options to use BLAS" FORCE )
+  set( BLAS_LINKER_FLAGS  "${BLAS_LINKER_FLAGS}"
+                          CACHE STRING "Linker flags to use BLAS" FORCE )
+  set( BLAS_LIBRARIES     "${BLAS_LIBRARIES}"
+                          CACHE FILEPATH "BLAS libraries name" FORCE )
+  set( BLAS_LIBRARIES_DIR "${BLAS_LIBRARIES_DIR}"
+                          CACHE PATH "Directories containing the BLAS libraries" FORCE )
+
+  #message("DEBUG: BLAS_INCLUDE_DIR = ${BLAS_INCLUDE_DIR}")
+  #message("DEBUG: BLAS_DEFINITIONS = ${BLAS_DEFINITIONS}")
+  #message("DEBUG: BLAS_LINKER_FLAGS = ${BLAS_LINKER_FLAGS}")
+  #message("DEBUG: BLAS_LIBRARIES = ${BLAS_LIBRARIES}")
+  #message("DEBUG: BLAS_LIBRARIES_DIR = ${BLAS_LIBRARIES_DIR}")
+  #message("DEBUG: BLAS_FOUND = ${BLAS_FOUND}")
+
+endif(BLAS_LIBRARIES_DIR OR BLAS_LIBRARIES)

+ 49 - 0
catkin_ws/src/snowboy_wakeup/include/hotword_detector.h

@@ -0,0 +1,49 @@
+#ifndef SNOWBOY_ROS_HOTWORD_DETECTOR_H_
+#define SNOWBOY_ROS_HOTWORD_DETECTOR_H_
+
+#include <snowboy/include/snowboy-detect.h>
+
+namespace snowboy_wakeup
+{
+    //!
+    //! \brief The HotwordDetector class wraps Snowboy detect so we can use C++ 11
+    //!
+    class HotwordDetector
+    {
+        public:
+            HotwordDetector();
+            ~HotwordDetector();
+
+            //!
+            //!
+            //! \brief initialize Initializes the Snowboy detector
+            //! \param [in]  resource_filename   Filename of the resource file.
+            //! \param [in]  model_filename      Filename(s) of one or more hotword models
+            //!
+            void initialize(const char* resource_filename, const char* model_filename);
+
+            //!
+            //! \brief configure Configure the detector on runtime
+            //! \param sensitivity Hotword sensitivity
+            //! \param audio_gain Fixed gain to the input audio.
+            //! \return True if success, False otherwise
+            //!
+            bool configure(double sensitivity, double audio_gain);
+
+            //!
+            //! \brief runDetection Runs hotword detection of Snowboy, see Snowboy API for more docs
+            //! \param data Small chunk of data to be detected
+            //! \param array_length Length of the data array.
+            //! \return -3 not initialized, -2 Silence, -1 Error, 0 No event, 1 Hotword triggered
+            //!
+            int runDetection(const int16_t* const data, const int array_length);
+
+        private:
+            //!
+            //! \brief detector_ Instance of Snowboy detect
+            //!
+            snowboy::SnowboyDetect* detector_;
+    };
+}// namespace snowboy_wakeup
+
+#endif  // SNOWBOY_ROS_HOTWORD_DETECTOR_H_
+

+ 23 - 0
catkin_ws/src/snowboy_wakeup/launch/snowboy_wakeup.launch

@@ -0,0 +1,23 @@
+<launch>
+    <arg name="ASR_Topic" default="/voice_system/asr_topic" />
+    <arg name="AUDIO_Topic" default="/voice_system/audio_data" />
+
+    <node name="audio_capture" pkg="audio_capture" type="audio_capture" output="screen" required="true">
+        <param name="format" value="wave" />
+        <param name="channels" value="1" />
+        <param name="depth" value="16" />
+        <param name="sample_rate" value="16000" />
+
+        <remap from="audio" to="$(arg AUDIO_Topic)" />
+    </node>
+
+    <node pkg="snowboy_wakeup" type="hotword_detector_node" name="snowboy_wakeup" respawn="true">
+        <param name="resource_filename" value="$(find snowboy_wakeup)/resources/common.res" />
+        <param name="model_filename" value="$(find snowboy_wakeup)/resources/snowboy.umdl $(find snowboy_wakeup)/resources/corvin.pmdl" />
+
+        <param name="sensitivity_str" value="0.7" type="str" />
+        <param name="audio_gain" value="1.0" />
+        <param name="asr_topic" value="$(arg ASR_Topic)" />
+        <param name="audio_topic" value="$(arg AUDIO_Topic)" />
+    </node>
+</launch>

+ 16 - 0
catkin_ws/src/snowboy_wakeup/package.xml

@@ -0,0 +1,16 @@
+<?xml version="1.0"?>
+<package>
+  <name>snowboy_wakeup</name>
+  <version>0.0.0</version>
+  <description>snowboy hotword detector</description>
+
+  <maintainer email="corvin_zhang@corvin.cn">corvin</maintainer>
+
+  <license>MIT</license>
+
+  <buildtool_depend>catkin</buildtool_depend>
+
+  <build_depend>libblas-dev</build_depend>
+  <build_depend>audio_common_msgs</build_depend>
+  <run_depend>audio_common_msgs</run_depend>
+</package>

Binary
catkin_ws/src/snowboy_wakeup/resources/common.res


Binary
catkin_ws/src/snowboy_wakeup/resources/corvin.pmdl


Binary
catkin_ws/src/snowboy_wakeup/resources/ding.wav


Binary
catkin_ws/src/snowboy_wakeup/resources/dong.wav


Binary
catkin_ws/src/snowboy_wakeup/resources/snowboy.umdl


+ 59 - 0
catkin_ws/src/snowboy_wakeup/src/hotword_detector.cpp

@@ -0,0 +1,59 @@
+#define _GLIBCXX_USE_CXX11_ABI 0
+#include <hotword_detector.h>
+#include <sstream>
+
+namespace snowboy_wakeup
+{
+    HotwordDetector::HotwordDetector() : detector_(0)
+    {
+    }
+
+    void HotwordDetector::initialize(const char* resource_filename, const char* model_filename)
+    {
+        // Delete detector if we already had one
+        if (detector_)
+        {
+            delete detector_;
+        }
+
+        // The prebuilt libsnowboy-detect.a was compiled with the
+        // pre-C++11 std::string ABI (see _GLIBCXX_USE_CXX11_ABI above),
+        // so the strings passed to it must be built under that ABI.
+        std::string resource_filename_cpp98(resource_filename);
+        std::string model_filename_cpp98(model_filename);
+
+        detector_ = new snowboy::SnowboyDetect(resource_filename_cpp98, model_filename_cpp98);
+    }
+
+    bool HotwordDetector::configure(double sensitivity, double audio_gain)
+    {
+        // Return false if detector not initialized
+        if (!detector_)
+        {
+            return false;
+        }
+
+        std::stringstream sensitivity_ss; sensitivity_ss << sensitivity;
+
+        detector_->SetAudioGain(audio_gain);
+        detector_->SetSensitivity(sensitivity_ss.str());
+
+        return true;
+    }
+
+    HotwordDetector::~HotwordDetector()
+    {
+        if (detector_)
+        {
+            delete detector_;
+        }
+    }
+
+    int HotwordDetector::runDetection(const int16_t* const data, const int array_length)
+    {
+        if (!detector_)
+        {
+            return -3;
+        }
+        return detector_->RunDetection(data, array_length);
+    }
+}
+
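`HotwordDetector::configure` above formats the sensitivity with a stringstream because Snowboy's `SetSensitivity()` takes a string; when several hotword models are loaded (the launch file passes two model files), Snowboy expects one comma-separated value per model. A minimal Python sketch of that formatting (`sensitivity_string` is a hypothetical helper for illustration, not code from this package):

```python
def sensitivity_string(sensitivities):
    """Format per-model sensitivities as the comma-separated
    string that Snowboy's SetSensitivity() expects."""
    return ",".join(str(s) for s in sensitivities)

print(sensitivity_string([0.7]))         # one model
print(sensitivity_string([0.7, 0.45]))   # one value per loaded model
```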

+ 149 - 0
catkin_ws/src/snowboy_wakeup/src/hotword_detector_node.cpp

@@ -0,0 +1,149 @@
+#include <ros/ros.h>
+#include <std_msgs/Int32.h>
+#include <snowboy_wakeup/SnowboyReconfigureConfig.h>
+#include <audio_common_msgs/AudioData.h>
+#include <dynamic_reconfigure/server.h>
+#include <hotword_detector.h>
+
+
+namespace snowboy_wakeup
+{
+    //!
+    //! \brief The HotwordDetectorNode class Wraps the C++ 11 Snowboy detector in a ROS node
+    //!
+    class HotwordDetectorNode
+    {
+        public:
+            HotwordDetectorNode():nh_(""),nh_p_("~")
+            {}
+
+            bool initialize()
+            {
+                std::string resource_filename;
+                if (!nh_p_.getParam("resource_filename", resource_filename))
+                {
+                    ROS_ERROR("Mandatory parameter 'resource_filename' not present on the parameter server");
+                    return false;
+                }
+
+                std::string model_filename;
+                if (!nh_p_.getParam("model_filename", model_filename))
+                {
+                    ROS_ERROR("Mandatory parameter 'model_filename' not present on the parameter server");
+                    return false;
+                }
+
+                std::string asr_topic;
+                if (!nh_p_.getParam("asr_topic", asr_topic))
+                {
+                    ROS_ERROR("Mandatory parameter 'asr_topic' not present on the parameter server");
+                    return false;
+                }
+
+                std::string audio_topic;
+                if (!nh_p_.getParam("audio_topic", audio_topic))
+                {
+                    ROS_ERROR("Mandatory parameter 'audio_topic' not present on the parameter server");
+                    return false;
+                }
+
+                audio_sub_ = nh_.subscribe(audio_topic, 1000, &HotwordDetectorNode::audioCallback, this);
+                hotword_pub_ = nh_.advertise<std_msgs::Int32>(asr_topic, 1);
+
+                detector_.initialize(resource_filename.c_str(), model_filename.c_str());
+                dynamic_reconfigure_server_.setCallback(boost::bind(&HotwordDetectorNode::reconfigureCallback, this, _1, _2));
+
+                return true;
+            }
+
+        private:
+            ros::NodeHandle nh_;
+
+            //!
+            //! \brief nh_p_ Local nodehandle for parameters
+            //!
+            ros::NodeHandle nh_p_;
+
+            ros::Subscriber audio_sub_;
+            ros::Publisher hotword_pub_;
+
+            //!
+            //! \brief dynamic_reconfigure_server_ In order to online tune the sensitivity and audio gain
+            //!
+            dynamic_reconfigure::Server<SnowboyReconfigureConfig> dynamic_reconfigure_server_;
+
+            //!
+            //! \brief detector_ C++ 11 Wrapped Snowboy detect
+            //!
+            HotwordDetector detector_;
+
+            //!
+            //! \brief reconfigureCallback Reconfigure update for sensitivity and audio gain
+            //! \param cfg The updated config
+            //!
+            void reconfigureCallback(SnowboyReconfigureConfig cfg, uint32_t)
+            {
+                detector_.configure(cfg.sensitivity, cfg.audio_gain);
+                ROS_INFO("SnowboyROS (Re)Configured");
+            }
+
+            //!
+            //! \brief audioCallback Audio stream callback
+            //! \param msg The audio data
+            //!
+            void audioCallback(const audio_common_msgs::AudioDataConstPtr& msg)
+            {
+                if (msg->data.size() != 0)
+                {
+                    if ( msg->data.size() % 2 )
+                    {
+                        ROS_ERROR("Not an even number of bytes in this message!");
+                        return;
+                    }
+
+                    int16_t sample_array[msg->data.size()/2];
+                    for ( size_t i = 0; i < msg->data.size(); i+=2 )
+                    {
+                        sample_array[i/2] = ((int16_t) (msg->data[i+1]) << 8) + (int16_t) (msg->data[i]);
+                    }
+
+                    std_msgs::Int32 hotword_msg;
+                    int result = detector_.runDetection( &sample_array[0], msg->data.size()/2);                    
+                    if (result == 1)
+                    {
+                        ROS_INFO("Hotword 1 detected!");
+                        hotword_msg.data = result;
+                        hotword_pub_.publish(hotword_msg);
+                        system("play -q --multi-threaded ~/Music/ding.wav");
+                    }
+                    else if (result == -3)
+                    {
+                        ROS_ERROR("Hotword detector not initialized");
+                    }
+                    else if (result == -1)
+                    {
+                        ROS_ERROR("Snowboy error");
+                    }
+                }
+            }
+    };
+
+}
+
+int main(int argc, char** argv)
+{
+    ros::init(argc, argv, "snowboy_wakeup_node");
+    snowboy_wakeup::HotwordDetectorNode ros_hotword_detector_node;
+
+    if (ros_hotword_detector_node.initialize())
+    {
+        ros::spin();
+    }
+    else
+    {
+        ROS_ERROR("Failed to initialize snowboy_node");
+        return 1;
+    }
+
+    return 0;
+}
+
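`audioCallback` above reassembles little-endian 16-bit PCM samples from the raw byte stream before passing them to the detector. The same conversion can be sketched in Python (illustrative only; the node does this in C++):

```python
import struct

def bytes_to_samples(data):
    """Decode a little-endian int16 PCM byte buffer, mirroring the
    node's per-sample reassembly (data[i+1] << 8) + data[i]."""
    if len(data) % 2:
        raise ValueError("expected an even number of bytes")
    return list(struct.unpack("<%dh" % (len(data) // 2), data))

print(bytes_to_samples(bytes([0x00, 0x01, 0xFF, 0xFF])))  # [256, -1]
```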

+ 0 - 0
Makefile → example/duer_os/Makefile


+ 0 - 0
comm.mk → example/duer_os/comm.mk


+ 0 - 0
include/libduer-device/include/baidu_json.h → example/duer_os/include/libduer-device/include/baidu_json.h


+ 0 - 0
include/libduer-device/include/device_vad.h → example/duer_os/include/libduer-device/include/device_vad.h


+ 0 - 0
include/libduer-device/include/lightduer_adapter.h → example/duer_os/include/libduer-device/include/lightduer_adapter.h


+ 0 - 0
include/libduer-device/include/lightduer_aes.h → example/duer_os/include/libduer-device/include/lightduer_aes.h


+ 0 - 0
include/libduer-device/include/lightduer_bind_device.h → example/duer_os/include/libduer-device/include/lightduer_bind_device.h


+ 0 - 0
include/libduer-device/include/lightduer_bitmap.h → example/duer_os/include/libduer-device/include/lightduer_bitmap.h


+ 0 - 0
include/libduer-device/include/lightduer_ca.h → example/duer_os/include/libduer-device/include/lightduer_ca.h


+ 0 - 0
include/libduer-device/include/lightduer_ca_conf.h → example/duer_os/include/libduer-device/include/lightduer_ca_conf.h


+ 0 - 0
include/libduer-device/include/lightduer_coap.h → example/duer_os/include/libduer-device/include/lightduer_coap.h


+ 0 - 0
include/libduer-device/include/lightduer_coap_defs.h → example/duer_os/include/libduer-device/include/lightduer_coap_defs.h


+ 0 - 0
include/libduer-device/include/lightduer_coap_ep.h → example/duer_os/include/libduer-device/include/lightduer_coap_ep.h


+ 0 - 0
include/libduer-device/include/lightduer_coap_trace.h → example/duer_os/include/libduer-device/include/lightduer_coap_trace.h


+ 0 - 0
include/libduer-device/include/lightduer_connagent.h → example/duer_os/include/libduer-device/include/lightduer_connagent.h


+ 0 - 0
include/libduer-device/include/lightduer_data_cache.h → example/duer_os/include/libduer-device/include/lightduer_data_cache.h


+ 0 - 0
include/libduer-device/include/lightduer_dcs.h → example/duer_os/include/libduer-device/include/lightduer_dcs.h


+ 0 - 0
include/libduer-device/include/lightduer_dcs_alert.h → example/duer_os/include/libduer-device/include/lightduer_dcs_alert.h


+ 0 - 0
include/libduer-device/include/lightduer_dcs_local.h → example/duer_os/include/libduer-device/include/lightduer_dcs_local.h


+ 0 - 0
include/libduer-device/include/lightduer_dcs_router.h → example/duer_os/include/libduer-device/include/lightduer_dcs_router.h


+ 0 - 0
include/libduer-device/include/lightduer_debug.h → example/duer_os/include/libduer-device/include/lightduer_debug.h


+ 0 - 0
include/libduer-device/include/lightduer_dev_info.h → example/duer_os/include/libduer-device/include/lightduer_dev_info.h


+ 0 - 0
include/libduer-device/include/lightduer_ds_log.h → example/duer_os/include/libduer-device/include/lightduer_ds_log.h


+ 0 - 0
include/libduer-device/include/lightduer_ds_log_audio.h → example/duer_os/include/libduer-device/include/lightduer_ds_log_audio.h


+ 0 - 0
include/libduer-device/include/lightduer_ds_log_audio_player.h → example/duer_os/include/libduer-device/include/lightduer_ds_log_audio_player.h


+ 0 - 0
include/libduer-device/include/lightduer_ds_log_bind.h → example/duer_os/include/libduer-device/include/lightduer_ds_log_bind.h


Some files were not shown because too many files changed in this diff