faultydnn


An interesting objective for future research is therefore to identify ways to add some redundancy to the inference computations so as to make the robustness independent of the performance target.

{{::no_pooling123_n_vs_p_bis_small.png?nolink |}}

This figure shows the //fault-tolerance efficiency// of some CNN models, which measures the fraction of computations that are spent on "useful" work versus computations that are needed only to provide robustness. For example, an efficiency of 1 means that the amount of computation needed is the same as for a reliable implementation, and an efficiency of 0.8 means that a reliable implementation would only need to perform 80% of the computations. Each curve corresponds to a constant performance target (in this case, classification error), and the parameter "p" is the probability that a neuron's output is replaced with a random value. This figure shows us two things: 1) for these "vanilla" CNNs, there is a threshold effect on the amount of faults that can be tolerated, i.e. efficiency stays close to 1 while the faultiness is below some threshold and then quickly drops to 0 once that threshold is passed; and 2) the value of this threshold depends on the performance target that we set.
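The fault model described above can be sketched in a few lines of NumPy: each neuron's output is independently replaced, with probability p, by a random value. This is only an illustrative sketch of that fault-injection model, not the code used for the experiments; the helper name `inject_faults` and the uniform range of the random replacements are assumptions.

```python
import numpy as np

def inject_faults(activations, p, rng, low=-1.0, high=1.0):
    """Hypothetical helper: with probability p, replace each neuron's
    output with a value drawn uniformly from [low, high).
    The uniform distribution of the faulty values is an assumption."""
    mask = rng.random(activations.shape) < p        # which neurons fail
    noise = rng.uniform(low, high, size=activations.shape)
    return np.where(mask, noise, activations)

# Tiny demo: a layer of constant activations with a 25% fault rate.
rng = np.random.default_rng(0)
acts = np.ones((4, 8))
faulty = inject_faults(acts, p=0.25, rng=rng)
```

Sweeping p for a fixed network and recording the largest fault rate at which the classification-error target is still met is one way to locate the threshold behaviour the figure describes.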

[[projects|Back to projects]]

faultydnn.txt ยท Last modified: 2017/12/01 22:27 by francoislp