FIST: A Framework to Interleave Spiking Neural Networks on CGRAs
| dc.contributor.author | Tuan Nguyen | |
| dc.contributor.author | Syed M. A. H. Jafri | |
| dc.contributor.author | Masoud Daneshtalab | |
| dc.contributor.author | Ahmed Hemani | |
| dc.contributor.author | Sergei Dytckov | |
| dc.contributor.author | Juha Plosila | |
| dc.contributor.author | Hannu Tenhunen | |
| dc.contributor.organization | fi=ohjelmistotekniikka|en=Software Engineering| | |
| dc.contributor.organization | fi=sulautettu elektroniikka|en=Embedded Electronics| | |
| dc.contributor.organization | fi=tietoliikennetekniikka|en=Communication Systems| | |
| dc.contributor.organization-code | 1.2.246.10.2458963.20.20754768032 | |
| dc.contributor.organization-code | 2606801 | |
| dc.contributor.organization-code | 2606804 | |
| dc.converis.publication-id | 3960200 | |
| dc.converis.url | https://research.utu.fi/converis/portal/Publication/3960200 | |
| dc.date.accessioned | 2022-10-28T14:15:21Z | |
| dc.date.available | 2022-10-28T14:15:21Z | |
| dc.description.abstract | Coarse Grained Reconfigurable Architectures (CGRAs) are emerging as enabling platforms to meet the high performance demanded by modern embedded applications. In many application domains (e.g. robotics and cognitive embedded systems), CGRAs are required to simultaneously host processing (e.g. audio/video acquisition) and estimation (e.g. audio/video/image recognition) tasks. Recent works have revealed that the efficiency and scalability of the estimation algorithms can be significantly improved by using neural networks. However, existing CGRAs commonly employ homogeneous processing resources for both tasks. To realize the best of both worlds (conventional processing and neural networks), we present FIST. FIST allows the processing elements and the network to dynamically morph into either a conventional CGRA or a neural network, depending on the hosted application. We have chosen the DRRA as a vehicle to study the feasibility and overheads of our approach. Synthesis results reveal that the proposed enhancements incur negligible overheads (4.4% area and 9.1% power) compared to the original DRRA cell. | |
| dc.format.pagerange | 751 | |
| dc.format.pagerange | 758 | |
| dc.identifier.isbn | 978-1-4799-8490-9 | |
| dc.identifier.olddbid | 187205 | |
| dc.identifier.oldhandle | 10024/170299 | |
| dc.identifier.uri | https://www.utupub.fi/handle/11111/42747 | |
| dc.identifier.urn | URN:NBN:fi-fe2021042715453 | |
| dc.language.iso | en | |
| dc.okm.affiliatedauthor | Nguyen, Tuan | |
| dc.okm.affiliatedauthor | Jafri, Syed | |
| dc.okm.affiliatedauthor | Daneshtalab, Masoud | |
| dc.okm.affiliatedauthor | Dytckov, Sergei | |
| dc.okm.affiliatedauthor | Plosila, Juha | |
| dc.okm.affiliatedauthor | Tenhunen, Hannu | |
| dc.okm.discipline | 113 Computer and information sciences | en_GB |
| dc.okm.discipline | 113 Tietojenkäsittely ja informaatiotieteet | fi_FI |
| dc.okm.internationalcopublication | international co-publication | |
| dc.okm.internationality | International publication | |
| dc.okm.type | A4 Conference Article | |
| dc.publisher.country | United States | en_GB |
| dc.publisher.country | Yhdysvallat (USA) | fi_FI |
| dc.publisher.country-code | US | |
| dc.relation.conference | Euromicro international conference on parallel, distributed and network-based processing | |
| dc.relation.doi | 10.1109/PDP.2015.60 | |
| dc.source.identifier | https://www.utupub.fi/handle/10024/170299 | |
| dc.title | FIST: A Framework to Interleave Spiking Neural Networks on CGRAs | |
| dc.title.book | Parallel, Distributed and Network-Based Processing (PDP), 2015 23rd Euromicro International Conference on | |
| dc.year.issued | 2015 |
Files
- Name: tuanedgedetection.pdf
- Size: 396.39 KB
- Format: Adobe Portable Document Format
- Description: pre-print