
Webinar Recording

AI at the Edge: Machine Learning in Embedded Systems

Oct 07, 2021 | Length: 39:05

Are you looking to integrate artificial intelligence (AI) into your next product design? What about machine learning (ML) and deep learning (DL)? Start by learning the differences between these three concepts, how each type of model works, and the solutions available today that let you quickly integrate these technologies into your designs.

Whether you are just beginning to learn about these powerful technologies, or you are planning a specific project, this recorded webinar will accelerate your journey into AI, ML and DL. You will also learn how Digi embedded development solutions – the Digi XBee® ecosystem and the Digi ConnectCore® embedded system on module platform – can support your goals.

Connect with Digi

Want to learn more about how Digi can help you? Here are some next steps:

Webinar Follow-up Q&A

Thank you again for attending our session on AI at the Edge: Machine Learning in Embedded Systems. Here are the questions that followed the presentation and their answers. If you have additional questions, be sure to reach out.

What additional resources, processor, memory, etc., are required to implement machine learning in an embedded system?

It really depends. Machine learning applications cover a whole spectrum. As we've seen, there are simpler applications, such as the earlier example of monitoring a few sensors for changes in vibration patterns to provide predictive maintenance for a construction machine. For that use case, a small microcontroller with very little memory can be sufficient. And there are high-end applications, such as detecting objects in high-resolution video streams, that obviously need a lot of compute power and memory bandwidth to move the data around.

A lot of machine learning applications today originate from cloud development, where you have a lot of compute resources available. Developers there didn't have to worry much about compute resources, which is in sharp contrast to embedded devices. Moving that functionality to the edge, where you do not have all that compute performance available, is a tricky task. With machine learning at the edge, more attention needs to be paid to the models used and to the optimizations for resource-constrained embedded devices.
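
For a sense of what that optimization step can look like, here is a minimal sketch using TensorFlow Lite post-training quantization. The model path is a placeholder, and this is a generic example rather than the Digi or Au-Zone tooling discussed in this webinar.

```python
# Hedged sketch: shrinking a cloud-trained TensorFlow model for a
# resource-constrained edge target with post-training quantization.
# "saved_model_dir" is a placeholder path, not an actual Digi artifact.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Default optimizations quantize weights to 8-bit integers, reducing
# model size and speeding up inference on small CPUs and MCUs.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting flat buffer is what gets deployed to the embedded device.
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```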

There are vendors like our partner Au-Zone, which we saw in the demo as well, who are experts in this area. They provide an embedded inference engine and model optimization tools to prepare models to run on these constrained embedded devices, with low memory footprints and fast inference times even when few compute resources are available.

And we saw that example of voice recognition. Just to highlight again, we are going to provide such a solution with our ConnectCore SoM offering, and it is optimized for embedded devices. So you don't need a fancy neural network processing unit, which is also costly. You can run that voice recognition application, supporting a vocabulary of thousands of words, on a single Cortex-A core at less than 50% load, whereas you might need an NPU to do the same thing with a non-optimized, build-it-yourself open-source machine learning framework.

Is it possible to use deep learning on text data, for example processing poetry in order to identify different genres?

It's certainly possible to process text and classify text elements with machine learning. There are plenty of use cases for that, for example spam filters that process the text of emails and classify them as spam or non-spam. It's not quite poetry in those emails, but it's related, I guess.
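
As a concrete illustration of the poetry question (not something shown in the webinar), here is a minimal text-classification sketch in Python with scikit-learn. The tiny dataset and genre labels are invented purely for demonstration; a deep learning model would slot into the same pattern.

```python
# Toy genre classifier: TF-IDF features plus logistic regression.
# The four example lines and their labels are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Shall I compare thee to a summer's day",
    "There once was a man from Nantucket",
    "Thou art more lovely and more temperate",
    "Who kept all his cash in a bucket",
]
labels = ["sonnet", "limerick", "sonnet", "limerick"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Classify a new line of text into one of the learned genres.
print(model.predict(["Rough winds do shake the darling buds of May"]))
```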

How do artificial intelligence and machine learning in a device impact security?

There are security threats targeting machine learning applications, if that's the question. For example, attackers can modify the inputs to a machine learning model with certain techniques to mislead it so that it misclassifies an object, such as a road sign in a traffic application. With the right manipulations, the model will even report a high confidence for the wrong class, and that is certainly a security issue and also a safety issue.

Such an attack would be called an adversarial example attack, and there are methods to harden the model against such attacks, which can be applied during model training and development. Other security issues with machine learning include model theft and model inversion attacks. Digi provides the countermeasures available from NXP and the eIQ Machine Learning tools to address some of these machine learning-specific security issues.
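
To make the attack idea concrete, here is a hedged sketch of one well-known technique of this kind, the fast gradient sign method (FGSM), written against the TensorFlow/Keras API. The model and inputs are placeholders; this is illustrative only and is not the NXP eIQ tooling mentioned above.

```python
# FGSM sketch: perturb inputs in the direction that increases the loss,
# so a model that is not hardened may misclassify them with confidence.
import tensorflow as tf

def fgsm_perturb(model, images, labels, epsilon=0.01):
    """Return adversarially perturbed copies of `images` (placeholder model)."""
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    images = tf.convert_to_tensor(images, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(images)
        predictions = model(images)      # assumes the model outputs probabilities
        loss = loss_fn(labels, predictions)
    gradient = tape.gradient(loss, images)
    # Each pixel moves only by +/- epsilon, so the change is hard to see,
    # yet the predicted class can flip.
    return tf.clip_by_value(images + epsilon * tf.sign(gradient), 0.0, 1.0)

# Hardening ("adversarial training") then mixes such perturbed inputs back
# into the training set so the model learns to classify them correctly.
```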

But general system security is also important, and other security features such as secure boot, an encrypted file system, protected ports, and tamper detection need to be enabled as well. Digi TrustFence, the complete security framework we provide as part of the Digi ConnectCore SoM offering, covers this. All the features I just mentioned are fully integrated into the bootloader, the kernel, and the root file system, ready to use without becoming a security expert or spending many weeks enabling them. So they work across all the hardware and software layers.

Will the presentation be recorded for later viewing?

Yes, absolutely. The recording will be available for later playback on the BrightTALK platform here, and we will also post the link on our website, www.digi.com. In the resources section you will see a webinars section, and that is where we post all the webinars for later viewing.

How do you validate the accuracy of the machine learning model?

That is done during the training phase. Usually you have a big set of data to train the model, and you set aside different data to do the actual testing and verify the accuracy of the model. Once you're happy with the accuracy, you're done. If you're not, you need more training data, feed that into the model, train it further, test again with different data, and iterate that process until you're happy with the accuracy. That's the high-level view.
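
As a rough illustration of that loop, here is a small Python sketch using scikit-learn and a stand-in dataset; it is not tied to any specific Digi or NXP tool.

```python
# Train on one portion of the data, then measure accuracy on data the
# model has never seen. The digits dataset is just a stand-in example.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier().fit(X_train, y_train)

# If this held-out accuracy is not good enough, gather more data,
# retrain, and test again on fresh data.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```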

There's another question about examples available for learning from many channels of low-speed signals.

Not sure if I got the question right, but there are two ways. You can build a model from scratch and do it all on your own, which requires lots and lots of data. But typically you would use a pre-trained model and then apply something called transfer learning. There are pre-trained models out there for image recognition, voice recognition, text, and many other things, and you would need to find a model that covers your use case. Then you would apply transfer learning to tweak or modify that model so it actually serves your exact use case.
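
For a sense of what transfer learning looks like in code, here is a hedged Keras sketch that reuses an image model pre-trained on ImageNet (MobileNetV2). The class count, input size and training data are placeholders, and the same idea applies to voice or text models.

```python
# Transfer learning sketch: keep the pre-trained feature extractor frozen
# and train only a small new classification head on your own data.
import tensorflow as tf

NUM_CLASSES = 3  # placeholder: the classes specific to your use case

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False  # reuse the learned features as-is

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(your_images, your_labels, epochs=5)  # train on your own dataset
```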

How was the latency of the wake word measured on the Cortex-M core? Is it possible to configure a wake word? Does it require additional learning?

So, in that scenario, the wake word can be configured. You can define your own wake words and train them into the model. You would put in the commands you want to recognize, record them spoken in different ways, and the engine would then recognize those words. The model is then transferred to run on the embedded device, on the Cortex-M core, and optimized to run efficiently on that engine. The latency of the wake word was not terribly important, I'd say. This is humans interacting with a machine, so if that takes a few more milliseconds, it is really not an issue. It wasn't required to be a really low latency, but it still had to feel snappy to people using it. It was below a second, so switching on the machine felt seamless.

Do you have any examples of using FPGAs for machine learning in the embedded area? How is this different in terms of requirements and performance?

Sorry, I can't answer that one; I don't have experience on the FPGA side. I'm sure an FPGA can be used to mimic a neural network processing engine, and I'm sure there is IP out there to run that functionality in an FPGA. But you already have all those cores available: in today's embedded SoCs (systems on chips) you often have a GPU you're not using, multiple Cortex-A cores, and often a companion Cortex-M core separate from that. So with these highly integrated SoCs you have plenty of cores and plenty of options, and using an external FPGA would just add cost and design complexity. But if it's required, I'm sure there are options to run neural network accelerators in an FPGA.

Watch Our Embedded Design Videos
Learn how to tackle embedded design challenges the right way

Related Content

- IoT and the Supply Chain: How Machine Learning Mitigates Bottlenecks. In fact, logistics tracking is one of the most widespread IoT technologies. (Read the blog)
- How Is EV Infrastructure Developing and Growing in the U.S.? In the U.S., public initiatives are being used to drive sustainability efforts, including building sustainable cities. But the future... (Read the blog)
- Digi Embedded Android and Android Extension Tools. Many embedded developers today choose the Android operating system for mobile and industrial applications. (Watch the video)
- Power Management Techniques in Embedded Systems. Applying key power management techniques in embedded system design can bring major benefits, such as longer battery life. (Read the blog)
- Digi ConnectCore 8M Nano Development Kit Unboxing and Getting Started. The Digi ConnectCore® 8M Nano system-on-module is an excellent development platform for rapid prototyping of embedded products. (Watch the video)
- Digi ConnectCore 8M Mini. Embedded system-on-module based on the NXP i.MX 8M Mini processor with a built-in video processing unit (VPU); designed for longevity and scalability in industrial IoT applications. (View the product)
- Digi ConnectCore SOM Solutions. Embedded system-on-modules based entirely on NXP i.MX applications processors, designed for longevity and scalability in industrial IoT applications. (View the PDF)
- Machine Learning Demo Using Digi ConnectCore and ByteSnap SnapUI. Digi International and ByteSnap Design collaborated to develop a fun and entertaining pirate game demo... (Watch the video)
- Digi XBee Ecosystem | Everything You Need to Explore and Create Wireless Connectivity. The complete Digi XBee ecosystem includes RF modules, code libraries, and the award-winning Digi XBee Tools suite. (View the product)
- Digi ConnectCore i.MX-Based SOMs Simplify and Speed Development. Developing IoT products is highly challenging, which is why a large percentage of embedded design projects end in failure. (Recorded webinar)
- Digi ConnectCore 8M Nano: Developer Resources, Security, Scalability. Digi International recently announced the Digi ConnectCore 8M Nano development kit. The Digi ConnectCore® 8M... (Read the blog)
- Digi ConnectCore 8M Nano. Embedded system-on-module based on the NXP i.MX 8M Nano processor; designed for longevity and scalability in industrial IoT applications. (View the product)
- Build vs. Buy: Navigating the Choice. In this white paper, we help you evaluate the best way to optimize your IP and make the right build vs. buy decision to achieve your goals. (View the PDF)

Have a question? Contact a Digi team member today!