larq_zoo.literature
BinaryAlexNet
```python
larq_zoo.literature.BinaryAlexNet(
    *,
    input_shape=None,
    input_tensor=None,
    weights="imagenet",
    include_top=True,
    num_classes=1000
)
```
Instantiates the BinaryAlexNet architecture.
Optionally loads weights pre-trained on ImageNet.
Model Summary
+binary_alexnet stats----------------------------------------------------------------------------------------------+
| Layer Input prec. Outputs # 1-bit # 32-bit Memory 1-bit MACs 32-bit MACs |
| (bit) x 1 x 1 (kB) |
+------------------------------------------------------------------------------------------------------------------+
| input_1 - ((None, 224, 224, 3),) 0 0 0 ? ? |
| quant_conv2d - (-1, 56, 56, 64) 23232 0 2.84 0 72855552 |
| max_pooling2d - (-1, 27, 27, 64) 0 0 0 0 0 |
| batch_normalization - (-1, 27, 27, 64) 0 128 0.50 0 0 |
| quant_conv2d_1 1 (-1, 27, 27, 192) 307200 0 37.50 223948800 0 |
| max_pooling2d_1 - (-1, 13, 13, 192) 0 0 0 0 0 |
| batch_normalization_1 - (-1, 13, 13, 192) 0 384 1.50 0 0 |
| quant_conv2d_2 1 (-1, 13, 13, 384) 663552 0 81.00 112140288 0 |
| batch_normalization_2 - (-1, 13, 13, 384) 0 768 3.00 0 0 |
| quant_conv2d_3 1 (-1, 13, 13, 384) 1327104 0 162.00 224280576 0 |
| batch_normalization_3 - (-1, 13, 13, 384) 0 768 3.00 0 0 |
| quant_conv2d_4 1 (-1, 13, 13, 256) 884736 0 108.00 149520384 0 |
| max_pooling2d_2 - (-1, 6, 6, 256) 0 0 0 0 0 |
| batch_normalization_4 - (-1, 6, 6, 256) 0 512 2.00 0 0 |
| flatten - (-1, 9216) 0 0 0 0 0 |
| quant_dense 1 (-1, 4096) 37748736 0 4608.00 37748736 0 |
| batch_normalization_5 - (-1, 4096) 0 8192 32.00 0 0 |
| quant_dense_1 1 (-1, 4096) 16777216 0 2048.00 16777216 0 |
| batch_normalization_6 - (-1, 4096) 0 8192 32.00 0 0 |
| quant_dense_2 1 (-1, 1000) 4096000 0 500.00 4096000 0 |
| batch_normalization_7 - (-1, 1000) 0 2000 7.81 0 0 |
| activation - (-1, 1000) 0 0 0 ? ? |
+------------------------------------------------------------------------------------------------------------------+
| Total 61827776 20944 7629.15 768512000 72855552 |
+------------------------------------------------------------------------------------------------------------------+
+binary_alexnet summary------------------------+
| Total params 61.8 M |
| Trainable params 61.8 M |
| Non-trainable params 20.9 k |
| Model size 7.45 MiB |
| Model size (8-bit FP weights) 7.39 MiB |
| Float-32 Equivalent 235.93 MiB |
| Compression Ratio of Memory 0.03 |
| Number of MACs 841 M |
| Ratio of MACs that are binarized 0.9134 |
+----------------------------------------------+
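The headline numbers in the summary box follow directly from the parameter counts in the stats table. A quick sketch of the arithmetic, using a hypothetical helper that is not part of larq_zoo:

```python
def summary_stats(n_1bit, n_32bit):
    """Recompute model size, float-32 equivalent and compression ratio."""
    size_bytes = n_1bit / 8 + n_32bit * 4  # 1-bit weights are bit-packed
    fp32_bytes = (n_1bit + n_32bit) * 4    # every weight stored as float32
    mib = 1024 ** 2
    return size_bytes / mib, fp32_bytes / mib, size_bytes / fp32_bytes

# totals from the binary_alexnet stats table above
size_mib, fp32_mib, ratio = summary_stats(61_827_776, 20_944)
# size_mib ≈ 7.45 MiB, fp32_mib ≈ 235.93 MiB, ratio ≈ 0.03
```

This reproduces the "Model size", "Float-32 Equivalent" and "Compression Ratio of Memory" rows of the summary.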
ImageNet Metrics
| Top-1 Accuracy | Top-5 Accuracy | Parameters | Memory |
| --- | --- | --- | --- |
| 36.30 % | 61.53 % | 61 859 192 | 7.49 MB |
Arguments
- `input_shape` (`Optional[Sequence[Optional[int]]]`): optional shape tuple, to be specified if you would like to use a model with an input image resolution that is not (224, 224, 3). It should have exactly 3 input channels.
- `input_tensor` (`Optional[tf.Tensor]`): optional Keras tensor (i.e. output of `layers.Input()`) to use as image input for the model.
- `weights` (`Optional[str]`): one of `None` (random initialization), `"imagenet"` (pre-training on ImageNet), or the path to the weights file to be loaded.
- `include_top` (`bool`): whether to include the fully-connected layer at the top of the network.
- `num_classes` (`int`): optional number of classes to classify images into, only to be specified if `include_top` is `True`, and if no `weights` argument is specified.
Returns
A Keras model instance.
Raises
- ValueError: in case of invalid argument for `weights`, or invalid input shape.
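The `weights`/`num_classes` constraint behind that `ValueError` can be sketched as a small validator; the function name and error message below are illustrative assumptions, not larq_zoo's actual implementation:

```python
def check_classifier_args(weights, include_top, num_classes):
    # Hypothetical sketch: mirrors the documented rule that num_classes
    # may only differ from 1000 when no pre-trained weights are requested.
    if weights == "imagenet" and include_top and num_classes != 1000:
        raise ValueError(
            "num_classes must be 1000 when loading 'imagenet' weights "
            "with include_top=True"
        )

check_classifier_args(weights=None, include_top=True, num_classes=10)  # fine
```

The same rule applies to every model on this page, since they all share the constructor signature.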
BiRealNet
```python
larq_zoo.literature.BiRealNet(
    *,
    input_shape=None,
    input_tensor=None,
    weights="imagenet",
    include_top=True,
    num_classes=1000
)
```
Instantiates the Bi-Real Net architecture.
Optionally loads weights pre-trained on ImageNet.
Model Summary
+birealnet18 stats--------------------------------------------------------------------------------------------------+
| Layer Input prec. Outputs # 1-bit # 32-bit Memory 1-bit MACs 32-bit MACs |
| (bit) x 1 x 1 (kB) |
+-------------------------------------------------------------------------------------------------------------------+
| input_1 - ((None, 224, 224, 3),) 0 0 0 ? ? |
| conv2d - (-1, 112, 112, 64) 0 9408 36.75 0 118013952 |
| batch_normalization - (-1, 112, 112, 64) 0 128 0.50 0 0 |
| max_pooling2d - (-1, 56, 56, 64) 0 0 0 0 0 |
| quant_conv2d 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_1 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| add - (-1, 56, 56, 64) 0 0 0 ? ? |
| quant_conv2d_1 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_2 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| add_1 - (-1, 56, 56, 64) 0 0 0 ? ? |
| quant_conv2d_2 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_3 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| add_2 - (-1, 56, 56, 64) 0 0 0 ? ? |
| quant_conv2d_3 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_4 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| add_3 - (-1, 56, 56, 64) 0 0 0 ? ? |
| average_pooling2d - (-1, 28, 28, 64) 0 0 0 0 0 |
| quant_conv2d_4 1 (-1, 28, 28, 128) 73728 0 9.00 57802752 0 |
| conv2d_1 - (-1, 28, 28, 128) 0 8192 32.00 0 6422528 |
| batch_normalization_6 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| batch_normalization_5 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| add_4 - (-1, 28, 28, 128) 0 0 0 ? ? |
| quant_conv2d_5 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_7 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| add_5 - (-1, 28, 28, 128) 0 0 0 ? ? |
| quant_conv2d_6 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_8 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| add_6 - (-1, 28, 28, 128) 0 0 0 ? ? |
| quant_conv2d_7 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_9 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| add_7 - (-1, 28, 28, 128) 0 0 0 ? ? |
| average_pooling2d_1 - (-1, 14, 14, 128) 0 0 0 0 0 |
| quant_conv2d_8 1 (-1, 14, 14, 256) 294912 0 36.00 57802752 0 |
| conv2d_2 - (-1, 14, 14, 256) 0 32768 128.00 0 6422528 |
| batch_normalization_11 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| batch_normalization_10 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| add_8 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_9 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_12 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| add_9 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_10 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_13 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| add_10 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_11 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_14 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| add_11 - (-1, 14, 14, 256) 0 0 0 ? ? |
| average_pooling2d_2 - (-1, 7, 7, 256) 0 0 0 0 0 |
| quant_conv2d_12 1 (-1, 7, 7, 512) 1179648 0 144.00 57802752 0 |
| conv2d_3 - (-1, 7, 7, 512) 0 131072 512.00 0 6422528 |
| batch_normalization_16 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| batch_normalization_15 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| add_12 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_13 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_17 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| add_13 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_14 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_18 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| add_14 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_15 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_19 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| add_15 - (-1, 7, 7, 512) 0 0 0 ? ? |
| average_pooling2d_3 - (-1, 1, 1, 512) 0 0 0 0 0 |
| flatten - (-1, 512) 0 0 0 0 0 |
| dense - (-1, 1000) 0 513000 2003.91 0 512000 |
| activation - (-1, 1000) 0 0 0 ? ? |
+-------------------------------------------------------------------------------------------------------------------+
| Total 10985472 704040 4091.16 1676279808 137793536 |
+-------------------------------------------------------------------------------------------------------------------+
+birealnet18 summary--------------------------+
| Total params 11.7 M |
| Trainable params 11.7 M |
| Non-trainable params 9.6 k |
| Model size 4.00 MiB |
| Model size (8-bit FP weights) 1.98 MiB |
| Float-32 Equivalent 44.59 MiB |
| Compression Ratio of Memory 0.09 |
| Number of MACs 1.81 B |
| Ratio of MACs that are binarized 0.9240 |
+---------------------------------------------+
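The "Ratio of MACs that are binarized" row is just the share of 1-bit MACs in the total; from the totals of the stats table above:

```python
# totals from the birealnet18 stats table above
binary_macs = 1_676_279_808
fp32_macs = 137_793_536

total_macs = binary_macs + fp32_macs        # ≈ 1.81 B
binarized_ratio = binary_macs / total_macs  # ≈ 0.9240
```

The full-precision remainder comes almost entirely from the first 7×7 convolution and the 1×1 downsampling shortcuts.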
ImageNet Metrics
| Top-1 Accuracy | Top-5 Accuracy | Parameters | Memory |
| --- | --- | --- | --- |
| 57.47 % | 79.84 % | 11 699 112 | 4.03 MB |
Arguments
- `input_shape` (`Optional[Sequence[Optional[int]]]`): optional shape tuple, to be specified if you would like to use a model with an input image resolution that is not (224, 224, 3). It should have exactly 3 input channels.
- `input_tensor` (`Optional[tf.Tensor]`): optional Keras tensor (i.e. output of `layers.Input()`) to use as image input for the model.
- `weights` (`Optional[str]`): one of `None` (random initialization), `"imagenet"` (pre-training on ImageNet), or the path to the weights file to be loaded.
- `include_top` (`bool`): whether to include the fully-connected layer at the top of the network.
- `num_classes` (`int`): optional number of classes to classify images into, only to be specified if `include_top` is `True`, and if no `weights` argument is specified.
Returns
A Keras model instance.
Raises
- ValueError: in case of invalid argument for `weights`, or invalid input shape.
BinaryResNetE18
```python
larq_zoo.literature.BinaryResNetE18(
    *,
    input_shape=None,
    input_tensor=None,
    weights="imagenet",
    include_top=True,
    num_classes=1000
)
```
Instantiates the BinaryResNetE 18 architecture.
Optionally loads weights pre-trained on ImageNet.
Model Summary
+binary_resnet_e_18 stats-------------------------------------------------------------------------------------------+
| Layer Input prec. Outputs # 1-bit # 32-bit Memory 1-bit MACs 32-bit MACs |
| (bit) x 1 x 1 (kB) |
+-------------------------------------------------------------------------------------------------------------------+
| input_1 - ((None, 224, 224, 3),) 0 0 0 ? ? |
| conv2d - (-1, 112, 112, 64) 0 9408 36.75 0 118013952 |
| batch_normalization - (-1, 112, 112, 64) 0 128 0.50 0 0 |
| activation - (-1, 112, 112, 64) 0 0 0 ? ? |
| max_pooling2d - (-1, 56, 56, 64) 0 0 0 0 0 |
| batch_normalization_1 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| quant_conv2d 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_2 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| add - (-1, 56, 56, 64) 0 0 0 ? ? |
| quant_conv2d_1 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_3 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| add_1 - (-1, 56, 56, 64) 0 0 0 ? ? |
| quant_conv2d_2 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_4 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| add_2 - (-1, 56, 56, 64) 0 0 0 ? ? |
| quant_conv2d_3 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_5 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| add_3 - (-1, 56, 56, 64) 0 0 0 ? ? |
| average_pooling2d - (-1, 28, 28, 64) 0 0 0 0 0 |
| quant_conv2d_4 1 (-1, 28, 28, 128) 73728 0 9.00 57802752 0 |
| conv2d_1 - (-1, 28, 28, 128) 0 8192 32.00 0 6422528 |
| batch_normalization_7 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| batch_normalization_6 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| add_4 - (-1, 28, 28, 128) 0 0 0 ? ? |
| quant_conv2d_5 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_8 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| add_5 - (-1, 28, 28, 128) 0 0 0 ? ? |
| quant_conv2d_6 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_9 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| add_6 - (-1, 28, 28, 128) 0 0 0 ? ? |
| quant_conv2d_7 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_10 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| add_7 - (-1, 28, 28, 128) 0 0 0 ? ? |
| average_pooling2d_1 - (-1, 14, 14, 128) 0 0 0 0 0 |
| quant_conv2d_8 1 (-1, 14, 14, 256) 294912 0 36.00 57802752 0 |
| conv2d_2 - (-1, 14, 14, 256) 0 32768 128.00 0 6422528 |
| batch_normalization_12 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| batch_normalization_11 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| add_8 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_9 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_13 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| add_9 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_10 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_14 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| add_10 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_11 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_15 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| add_11 - (-1, 14, 14, 256) 0 0 0 ? ? |
| average_pooling2d_2 - (-1, 7, 7, 256) 0 0 0 0 0 |
| quant_conv2d_12 1 (-1, 7, 7, 512) 1179648 0 144.00 57802752 0 |
| conv2d_3 - (-1, 7, 7, 512) 0 131072 512.00 0 6422528 |
| batch_normalization_17 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| batch_normalization_16 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| add_12 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_13 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_18 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| add_13 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_14 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_19 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| add_14 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_15 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_20 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| add_15 - (-1, 7, 7, 512) 0 0 0 ? ? |
| activation_1 - (-1, 7, 7, 512) 0 0 0 ? ? |
| average_pooling2d_3 - (-1, 1, 1, 512) 0 0 0 0 0 |
| flatten - (-1, 512) 0 0 0 0 0 |
| dense - (-1, 1000) 0 513000 2003.91 0 512000 |
| activation_2 - (-1, 1000) 0 0 0 ? ? |
+-------------------------------------------------------------------------------------------------------------------+
| Total 10985472 704168 4091.66 1676279808 137793536 |
+-------------------------------------------------------------------------------------------------------------------+
+binary_resnet_e_18 summary-------------------+
| Total params 11.7 M |
| Trainable params 11.7 M |
| Non-trainable params 9.73 k |
| Model size 4.00 MiB |
| Model size (8-bit FP weights) 1.98 MiB |
| Float-32 Equivalent 44.59 MiB |
| Compression Ratio of Memory 0.09 |
| Number of MACs 1.81 B |
| Ratio of MACs that are binarized 0.9240 |
+---------------------------------------------+
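The "Model size (8-bit FP weights)" row assumes the remaining full-precision parameters are stored at 8 bits each instead of 32, while the binary weights stay bit-packed; a quick check against the totals above:

```python
n_1bit, n_32bit = 10_985_472, 704_168  # totals from the stats table above

mib = 1024 ** 2
size_mib = (n_1bit / 8 + n_32bit * 4) / mib       # ≈ 4.00 MiB
size_8bit_mib = (n_1bit / 8 + n_32bit * 1) / mib  # ≈ 1.98 MiB
```

Quantizing the residual float weights to 8 bits roughly halves the on-disk size because the 704 k full-precision parameters dominate the memory budget.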
ImageNet Metrics
| Top-1 Accuracy | Top-5 Accuracy | Parameters | Memory |
| --- | --- | --- | --- |
| 58.32 % | 80.79 % | 11 699 368 | 4.03 MB |
Arguments
- `input_shape` (`Optional[Sequence[Optional[int]]]`): optional shape tuple, to be specified if you would like to use a model with an input image resolution that is not (224, 224, 3). It should have exactly 3 input channels.
- `input_tensor` (`Optional[tf.Tensor]`): optional Keras tensor (i.e. output of `layers.Input()`) to use as image input for the model.
- `weights` (`Optional[str]`): one of `None` (random initialization), `"imagenet"` (pre-training on ImageNet), or the path to the weights file to be loaded.
- `include_top` (`bool`): whether to include the fully-connected layer at the top of the network.
- `num_classes` (`int`): optional number of classes to classify images into, only to be specified if `include_top` is `True`, and if no `weights` argument is specified.
Returns
A Keras model instance.
Raises
- ValueError: in case of invalid argument for `weights`, or invalid input shape.
BinaryDenseNet28
```python
larq_zoo.literature.BinaryDenseNet28(
    *,
    input_shape=None,
    input_tensor=None,
    weights="imagenet",
    include_top=True,
    num_classes=1000
)
```
Instantiates the BinaryDenseNet 28 architecture.
Optionally loads weights pre-trained on ImageNet.
Model Summary
+binary_densenet28 stats-------------------------------------------------------------------------------------------+
| Layer Input prec. Outputs # 1-bit # 32-bit Memory 1-bit MACs 32-bit MACs |
| (bit) x 1 x 1 (kB) |
+------------------------------------------------------------------------------------------------------------------+
| input_1 - ((None, 224, 224, 3),) 0 0 0 ? ? |
| conv2d - (-1, 112, 112, 64) 0 9408 36.75 0 118013952 |
| batch_normalization - (-1, 112, 112, 64) 0 128 0.50 0 0 |
| activation - (-1, 112, 112, 64) 0 0 0 ? ? |
| max_pooling2d - (-1, 56, 56, 64) 0 0 0 0 0 |
| batch_normalization_1 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| quant_conv2d 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| concatenate - (-1, 56, 56, 128) 0 0 0 ? ? |
| batch_normalization_2 - (-1, 56, 56, 128) 0 256 1.00 0 0 |
| quant_conv2d_1 1 (-1, 56, 56, 64) 73728 0 9.00 231211008 0 |
| concatenate_1 - (-1, 56, 56, 192) 0 0 0 ? ? |
| batch_normalization_3 - (-1, 56, 56, 192) 0 384 1.50 0 0 |
| quant_conv2d_2 1 (-1, 56, 56, 64) 110592 0 13.50 346816512 0 |
| concatenate_2 - (-1, 56, 56, 256) 0 0 0 ? ? |
| batch_normalization_4 - (-1, 56, 56, 256) 0 512 2.00 0 0 |
| quant_conv2d_3 1 (-1, 56, 56, 64) 147456 0 18.00 462422016 0 |
| concatenate_3 - (-1, 56, 56, 320) 0 0 0 ? ? |
| batch_normalization_5 - (-1, 56, 56, 320) 0 640 2.50 0 0 |
| quant_conv2d_4 1 (-1, 56, 56, 64) 184320 0 22.50 578027520 0 |
| concatenate_4 - (-1, 56, 56, 384) 0 0 0 ? ? |
| batch_normalization_6 - (-1, 56, 56, 384) 0 768 3.00 0 0 |
| quant_conv2d_5 1 (-1, 56, 56, 64) 221184 0 27.00 693633024 0 |
| concatenate_5 - (-1, 56, 56, 448) 0 0 0 ? ? |
| batch_normalization_7 - (-1, 56, 56, 448) 0 896 3.50 0 0 |
| max_pooling2d_1 - (-1, 28, 28, 448) 0 0 0 0 0 |
| activation_1 - (-1, 28, 28, 448) 0 0 0 ? ? |
| conv2d_1 - (-1, 28, 28, 160) 0 71680 280.00 0 56197120 |
| batch_normalization_8 - (-1, 28, 28, 160) 0 320 1.25 0 0 |
| quant_conv2d_6 1 (-1, 28, 28, 64) 92160 0 11.25 72253440 0 |
| concatenate_6 - (-1, 28, 28, 224) 0 0 0 ? ? |
| batch_normalization_9 - (-1, 28, 28, 224) 0 448 1.75 0 0 |
| quant_conv2d_7 1 (-1, 28, 28, 64) 129024 0 15.75 101154816 0 |
| concatenate_7 - (-1, 28, 28, 288) 0 0 0 ? ? |
| batch_normalization_10 - (-1, 28, 28, 288) 0 576 2.25 0 0 |
| quant_conv2d_8 1 (-1, 28, 28, 64) 165888 0 20.25 130056192 0 |
| concatenate_8 - (-1, 28, 28, 352) 0 0 0 ? ? |
| batch_normalization_11 - (-1, 28, 28, 352) 0 704 2.75 0 0 |
| quant_conv2d_9 1 (-1, 28, 28, 64) 202752 0 24.75 158957568 0 |
| concatenate_9 - (-1, 28, 28, 416) 0 0 0 ? ? |
| batch_normalization_12 - (-1, 28, 28, 416) 0 832 3.25 0 0 |
| quant_conv2d_10 1 (-1, 28, 28, 64) 239616 0 29.25 187858944 0 |
| concatenate_10 - (-1, 28, 28, 480) 0 0 0 ? ? |
| batch_normalization_13 - (-1, 28, 28, 480) 0 960 3.75 0 0 |
| quant_conv2d_11 1 (-1, 28, 28, 64) 276480 0 33.75 216760320 0 |
| concatenate_11 - (-1, 28, 28, 544) 0 0 0 ? ? |
| batch_normalization_14 - (-1, 28, 28, 544) 0 1088 4.25 0 0 |
| max_pooling2d_2 - (-1, 14, 14, 544) 0 0 0 0 0 |
| activation_2 - (-1, 14, 14, 544) 0 0 0 ? ? |
| conv2d_2 - (-1, 14, 14, 192) 0 104448 408.00 0 20471808 |
| batch_normalization_15 - (-1, 14, 14, 192) 0 384 1.50 0 0 |
| quant_conv2d_12 1 (-1, 14, 14, 64) 110592 0 13.50 21676032 0 |
| concatenate_12 - (-1, 14, 14, 256) 0 0 0 ? ? |
| batch_normalization_16 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| quant_conv2d_13 1 (-1, 14, 14, 64) 147456 0 18.00 28901376 0 |
| concatenate_13 - (-1, 14, 14, 320) 0 0 0 ? ? |
| batch_normalization_17 - (-1, 14, 14, 320) 0 640 2.50 0 0 |
| quant_conv2d_14 1 (-1, 14, 14, 64) 184320 0 22.50 36126720 0 |
| concatenate_14 - (-1, 14, 14, 384) 0 0 0 ? ? |
| batch_normalization_18 - (-1, 14, 14, 384) 0 768 3.00 0 0 |
| quant_conv2d_15 1 (-1, 14, 14, 64) 221184 0 27.00 43352064 0 |
| concatenate_15 - (-1, 14, 14, 448) 0 0 0 ? ? |
| batch_normalization_19 - (-1, 14, 14, 448) 0 896 3.50 0 0 |
| quant_conv2d_16 1 (-1, 14, 14, 64) 258048 0 31.50 50577408 0 |
| concatenate_16 - (-1, 14, 14, 512) 0 0 0 ? ? |
| batch_normalization_20 - (-1, 14, 14, 512) 0 1024 4.00 0 0 |
| quant_conv2d_17 1 (-1, 14, 14, 64) 294912 0 36.00 57802752 0 |
| concatenate_17 - (-1, 14, 14, 576) 0 0 0 ? ? |
| batch_normalization_21 - (-1, 14, 14, 576) 0 1152 4.50 0 0 |
| max_pooling2d_3 - (-1, 7, 7, 576) 0 0 0 0 0 |
| activation_3 - (-1, 7, 7, 576) 0 0 0 ? ? |
| conv2d_3 - (-1, 7, 7, 256) 0 147456 576.00 0 7225344 |
| batch_normalization_22 - (-1, 7, 7, 256) 0 512 2.00 0 0 |
| quant_conv2d_18 1 (-1, 7, 7, 64) 147456 0 18.00 7225344 0 |
| concatenate_18 - (-1, 7, 7, 320) 0 0 0 ? ? |
| batch_normalization_23 - (-1, 7, 7, 320) 0 640 2.50 0 0 |
| quant_conv2d_19 1 (-1, 7, 7, 64) 184320 0 22.50 9031680 0 |
| concatenate_19 - (-1, 7, 7, 384) 0 0 0 ? ? |
| batch_normalization_24 - (-1, 7, 7, 384) 0 768 3.00 0 0 |
| quant_conv2d_20 1 (-1, 7, 7, 64) 221184 0 27.00 10838016 0 |
| concatenate_20 - (-1, 7, 7, 448) 0 0 0 ? ? |
| batch_normalization_25 - (-1, 7, 7, 448) 0 896 3.50 0 0 |
| quant_conv2d_21 1 (-1, 7, 7, 64) 258048 0 31.50 12644352 0 |
| concatenate_21 - (-1, 7, 7, 512) 0 0 0 ? ? |
| batch_normalization_26 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| quant_conv2d_22 1 (-1, 7, 7, 64) 294912 0 36.00 14450688 0 |
| concatenate_22 - (-1, 7, 7, 576) 0 0 0 ? ? |
| batch_normalization_27 - (-1, 7, 7, 576) 0 1152 4.50 0 0 |
| activation_4 - (-1, 7, 7, 576) 0 0 0 ? ? |
| average_pooling2d - (-1, 1, 1, 576) 0 0 0 0 0 |
| flatten - (-1, 576) 0 0 0 0 0 |
| dense - (-1, 1000) 0 577000 2253.91 0 576000 |
| activation_5 - (-1, 1000) 0 0 0 ? ? |
+------------------------------------------------------------------------------------------------------------------+
| Total 4202496 929000 4141.91 3587383296 202484224 |
+------------------------------------------------------------------------------------------------------------------+
+binary_densenet28 summary--------------------+
| Total params 5.13 M |
| Trainable params 5.11 M |
| Non-trainable params 19 k |
| Model size 4.04 MiB |
| Model size (8-bit FP weights) 1.39 MiB |
| Float-32 Equivalent 19.58 MiB |
| Compression Ratio of Memory 0.21 |
| Number of MACs 3.79 B |
| Ratio of MACs that are binarized 0.9466 |
+---------------------------------------------+
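Individual rows of the stats table follow the usual convolution accounting. For example, the first `quant_conv2d` row (a 3×3 binary convolution, 64 → 64 channels, 56×56 output):

```python
kh = kw = 3        # kernel height/width
c_in = c_out = 64  # input and output channels
h = w = 56         # output spatial size

params = kh * kw * c_in * c_out  # 36864 one-bit weights
memory_kb = params / 8 / 1024    # 4.50 kB when bit-packed
macs = params * h * w            # 115605504 one-bit MACs
```

The same accounting applied to every layer yields the totals row at the bottom of the table.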
ImageNet Metrics
| Top-1 Accuracy | Top-5 Accuracy | Parameters | Memory |
| --- | --- | --- | --- |
| 60.91 % | 82.83 % | 5 150 504 | 4.12 MB |
Arguments
- `input_shape` (`Optional[Sequence[Optional[int]]]`): optional shape tuple, to be specified if you would like to use a model with an input image resolution that is not (224, 224, 3). It should have exactly 3 input channels.
- `input_tensor` (`Optional[tf.Tensor]`): optional Keras tensor (i.e. output of `layers.Input()`) to use as image input for the model.
- `weights` (`Optional[str]`): one of `None` (random initialization), `"imagenet"` (pre-training on ImageNet), or the path to the weights file to be loaded.
- `include_top` (`bool`): whether to include the fully-connected layer at the top of the network.
- `num_classes` (`int`): optional number of classes to classify images into, only to be specified if `include_top` is `True`, and if no `weights` argument is specified.
Returns
A Keras model instance.
Raises
- ValueError: in case of invalid argument for `weights`, or invalid input shape.
BinaryDenseNet37
```python
larq_zoo.literature.BinaryDenseNet37(
    *,
    input_shape=None,
    input_tensor=None,
    weights="imagenet",
    include_top=True,
    num_classes=1000
)
```
Instantiates the BinaryDenseNet 37 architecture.
Optionally loads weights pre-trained on ImageNet.
Model Summary
+binary_densenet37 stats-------------------------------------------------------------------------------------------+
| Layer Input prec. Outputs # 1-bit # 32-bit Memory 1-bit MACs 32-bit MACs |
| (bit) x 1 x 1 (kB) |
+------------------------------------------------------------------------------------------------------------------+
| input_1 - ((None, 224, 224, 3),) 0 0 0 ? ? |
| conv2d - (-1, 112, 112, 64) 0 9408 36.75 0 118013952 |
| batch_normalization - (-1, 112, 112, 64) 0 128 0.50 0 0 |
| activation - (-1, 112, 112, 64) 0 0 0 ? ? |
| max_pooling2d - (-1, 56, 56, 64) 0 0 0 0 0 |
| batch_normalization_1 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| quant_conv2d 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| concatenate - (-1, 56, 56, 128) 0 0 0 ? ? |
| batch_normalization_2 - (-1, 56, 56, 128) 0 256 1.00 0 0 |
| quant_conv2d_1 1 (-1, 56, 56, 64) 73728 0 9.00 231211008 0 |
| concatenate_1 - (-1, 56, 56, 192) 0 0 0 ? ? |
| batch_normalization_3 - (-1, 56, 56, 192) 0 384 1.50 0 0 |
| quant_conv2d_2 1 (-1, 56, 56, 64) 110592 0 13.50 346816512 0 |
| concatenate_2 - (-1, 56, 56, 256) 0 0 0 ? ? |
| batch_normalization_4 - (-1, 56, 56, 256) 0 512 2.00 0 0 |
| quant_conv2d_3 1 (-1, 56, 56, 64) 147456 0 18.00 462422016 0 |
| concatenate_3 - (-1, 56, 56, 320) 0 0 0 ? ? |
| batch_normalization_5 - (-1, 56, 56, 320) 0 640 2.50 0 0 |
| quant_conv2d_4 1 (-1, 56, 56, 64) 184320 0 22.50 578027520 0 |
| concatenate_4 - (-1, 56, 56, 384) 0 0 0 ? ? |
| batch_normalization_6 - (-1, 56, 56, 384) 0 768 3.00 0 0 |
| quant_conv2d_5 1 (-1, 56, 56, 64) 221184 0 27.00 693633024 0 |
| concatenate_5 - (-1, 56, 56, 448) 0 0 0 ? ? |
| batch_normalization_7 - (-1, 56, 56, 448) 0 896 3.50 0 0 |
| max_pooling2d_1 - (-1, 28, 28, 448) 0 0 0 0 0 |
| activation_1 - (-1, 28, 28, 448) 0 0 0 ? ? |
| conv2d_1 - (-1, 28, 28, 128) 0 57344 224.00 0 44957696 |
| batch_normalization_8 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| quant_conv2d_6 1 (-1, 28, 28, 64) 73728 0 9.00 57802752 0 |
| concatenate_6 - (-1, 28, 28, 192) 0 0 0 ? ? |
| batch_normalization_9 - (-1, 28, 28, 192) 0 384 1.50 0 0 |
| quant_conv2d_7 1 (-1, 28, 28, 64) 110592 0 13.50 86704128 0 |
| concatenate_7 - (-1, 28, 28, 256) 0 0 0 ? ? |
| batch_normalization_10 - (-1, 28, 28, 256) 0 512 2.00 0 0 |
| quant_conv2d_8 1 (-1, 28, 28, 64) 147456 0 18.00 115605504 0 |
| concatenate_8 - (-1, 28, 28, 320) 0 0 0 ? ? |
| batch_normalization_11 - (-1, 28, 28, 320) 0 640 2.50 0 0 |
| quant_conv2d_9 1 (-1, 28, 28, 64) 184320 0 22.50 144506880 0 |
| concatenate_9 - (-1, 28, 28, 384) 0 0 0 ? ? |
| batch_normalization_12 - (-1, 28, 28, 384) 0 768 3.00 0 0 |
| quant_conv2d_10 1 (-1, 28, 28, 64) 221184 0 27.00 173408256 0 |
| concatenate_10 - (-1, 28, 28, 448) 0 0 0 ? ? |
| batch_normalization_13 - (-1, 28, 28, 448) 0 896 3.50 0 0 |
| quant_conv2d_11 1 (-1, 28, 28, 64) 258048 0 31.50 202309632 0 |
| concatenate_11 - (-1, 28, 28, 512) 0 0 0 ? ? |
| batch_normalization_14 - (-1, 28, 28, 512) 0 1024 4.00 0 0 |
| quant_conv2d_12 1 (-1, 28, 28, 64) 294912 0 36.00 231211008 0 |
| concatenate_12 - (-1, 28, 28, 576) 0 0 0 ? ? |
| batch_normalization_15 - (-1, 28, 28, 576) 0 1152 4.50 0 0 |
| quant_conv2d_13 1 (-1, 28, 28, 64) 331776 0 40.50 260112384 0 |
| concatenate_13 - (-1, 28, 28, 640) 0 0 0 ? ? |
| batch_normalization_16 - (-1, 28, 28, 640) 0 1280 5.00 0 0 |
| max_pooling2d_2 - (-1, 14, 14, 640) 0 0 0 0 0 |
| activation_2 - (-1, 14, 14, 640) 0 0 0 ? ? |
| conv2d_2 - (-1, 14, 14, 192) 0 122880 480.00 0 24084480 |
| batch_normalization_17 - (-1, 14, 14, 192) 0 384 1.50 0 0 |
| quant_conv2d_14 1 (-1, 14, 14, 64) 110592 0 13.50 21676032 0 |
| concatenate_14 - (-1, 14, 14, 256) 0 0 0 ? ? |
| batch_normalization_18 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| quant_conv2d_15 1 (-1, 14, 14, 64) 147456 0 18.00 28901376 0 |
| concatenate_15 - (-1, 14, 14, 320) 0 0 0 ? ? |
| batch_normalization_19 - (-1, 14, 14, 320) 0 640 2.50 0 0 |
| quant_conv2d_16 1 (-1, 14, 14, 64) 184320 0 22.50 36126720 0 |
| concatenate_16 - (-1, 14, 14, 384) 0 0 0 ? ? |
| batch_normalization_20 - (-1, 14, 14, 384) 0 768 3.00 0 0 |
| quant_conv2d_17 1 (-1, 14, 14, 64) 221184 0 27.00 43352064 0 |
| concatenate_17 - (-1, 14, 14, 448) 0 0 0 ? ? |
| batch_normalization_21 - (-1, 14, 14, 448) 0 896 3.50 0 0 |
| quant_conv2d_18 1 (-1, 14, 14, 64) 258048 0 31.50 50577408 0 |
| concatenate_18 - (-1, 14, 14, 512) 0 0 0 ? ? |
| batch_normalization_22 - (-1, 14, 14, 512) 0 1024 4.00 0 0 |
| quant_conv2d_19 1 (-1, 14, 14, 64) 294912 0 36.00 57802752 0 |
| concatenate_19 - (-1, 14, 14, 576) 0 0 0 ? ? |
| batch_normalization_23 - (-1, 14, 14, 576) 0 1152 4.50 0 0 |
| quant_conv2d_20 1 (-1, 14, 14, 64) 331776 0 40.50 65028096 0 |
| concatenate_20 - (-1, 14, 14, 640) 0 0 0 ? ? |
| batch_normalization_24 - (-1, 14, 14, 640) 0 1280 5.00 0 0 |
| quant_conv2d_21 1 (-1, 14, 14, 64) 368640 0 45.00 72253440 0 |
| concatenate_21 - (-1, 14, 14, 704) 0 0 0 ? ? |
| batch_normalization_25 - (-1, 14, 14, 704) 0 1408 5.50 0 0 |
| quant_conv2d_22 1 (-1, 14, 14, 64) 405504 0 49.50 79478784 0 |
| concatenate_22 - (-1, 14, 14, 768) 0 0 0 ? ? |
| batch_normalization_26 - (-1, 14, 14, 768) 0 1536 6.00 0 0 |
| quant_conv2d_23 1 (-1, 14, 14, 64) 442368 0 54.00 86704128 0 |
| concatenate_23 - (-1, 14, 14, 832) 0 0 0 ? ? |
| batch_normalization_27 - (-1, 14, 14, 832) 0 1664 6.50 0 0 |
| quant_conv2d_24 1 (-1, 14, 14, 64) 479232 0 58.50 93929472 0 |
| concatenate_24 - (-1, 14, 14, 896) 0 0 0 ? ? |
| batch_normalization_28 - (-1, 14, 14, 896) 0 1792 7.00 0 0 |
| quant_conv2d_25 1 (-1, 14, 14, 64) 516096 0 63.00 101154816 0 |
| concatenate_25 - (-1, 14, 14, 960) 0 0 0 ? ? |
| batch_normalization_29 - (-1, 14, 14, 960) 0 1920 7.50 0 0 |
| max_pooling2d_3 - (-1, 7, 7, 960) 0 0 0 0 0 |
| activation_3 - (-1, 7, 7, 960) 0 0 0 ? ? |
| conv2d_3 - (-1, 7, 7, 256) 0 245760 960.00 0 12042240 |
| batch_normalization_30 - (-1, 7, 7, 256) 0 512 2.00 0 0 |
| quant_conv2d_26 1 (-1, 7, 7, 64) 147456 0 18.00 7225344 0 |
| concatenate_26 - (-1, 7, 7, 320) 0 0 0 ? ? |
| batch_normalization_31 - (-1, 7, 7, 320) 0 640 2.50 0 0 |
| quant_conv2d_27 1 (-1, 7, 7, 64) 184320 0 22.50 9031680 0 |
| concatenate_27 - (-1, 7, 7, 384) 0 0 0 ? ? |
| batch_normalization_32 - (-1, 7, 7, 384) 0 768 3.00 0 0 |
| quant_conv2d_28 1 (-1, 7, 7, 64) 221184 0 27.00 10838016 0 |
| concatenate_28 - (-1, 7, 7, 448) 0 0 0 ? ? |
| batch_normalization_33 - (-1, 7, 7, 448) 0 896 3.50 0 0 |
| quant_conv2d_29 1 (-1, 7, 7, 64) 258048 0 31.50 12644352 0 |
| concatenate_29 - (-1, 7, 7, 512) 0 0 0 ? ? |
| batch_normalization_34 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| quant_conv2d_30 1 (-1, 7, 7, 64) 294912 0 36.00 14450688 0 |
| concatenate_30 - (-1, 7, 7, 576) 0 0 0 ? ? |
| batch_normalization_35 - (-1, 7, 7, 576) 0 1152 4.50 0 0 |
| quant_conv2d_31 1 (-1, 7, 7, 64) 331776 0 40.50 16257024 0 |
| concatenate_31 - (-1, 7, 7, 640) 0 0 0 ? ? |
| batch_normalization_36 - (-1, 7, 7, 640) 0 1280 5.00 0 0 |
| activation_4 - (-1, 7, 7, 640) 0 0 0 ? ? |
| average_pooling2d - (-1, 1, 1, 640) 0 0 0 0 0 |
| flatten - (-1, 640) 0 0 0 0 0 |
| dense - (-1, 1000) 0 641000 2503.91 0 640000 |
| activation_5 - (-1, 1000) 0 0 0 ? ? |
+------------------------------------------------------------------------------------------------------------------+
| Total 7593984 1108264 5256.16 4506808320 199738368 |
+------------------------------------------------------------------------------------------------------------------+
+binary_densenet37 summary--------------------+
| Total params 8.7 M |
| Trainable params 8.67 M |
| Non-trainable params 31.9 k |
| Model size 5.13 MiB |
| Model size (8-bit FP weights) 1.96 MiB |
| Float-32 Equivalent 33.20 MiB |
| Compression Ratio of Memory 0.15 |
| Number of MACs 4.71 B |
| Ratio of MACs that are binarized 0.9576 |
+---------------------------------------------+
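As with the other models, the parameter and size rows of the summary can be recovered from the totals of the stats table:

```python
n_1bit, n_32bit = 7_593_984, 1_108_264  # totals from the stats table above

total_params = n_1bit + n_32bit                    # ≈ 8.7 M
size_mib = (n_1bit / 8 + n_32bit * 4) / 1024 ** 2  # ≈ 5.13 MiB
```

Despite having more parameters than BinaryDenseNet28, the model remains close in size because the binary weights are bit-packed.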
ImageNet Metrics
| Top-1 Accuracy | Top-5 Accuracy | Parameters | Memory |
| --- | --- | --- | --- |
| 62.89 % | 84.19 % | 8 734 120 | 5.25 MB |
Arguments
- `input_shape` (`Optional[Sequence[Optional[int]]]`): optional shape tuple, to be specified if you would like to use a model with an input image resolution that is not (224, 224, 3). It should have exactly 3 input channels.
- `input_tensor` (`Optional[tf.Tensor]`): optional Keras tensor (i.e. output of `layers.Input()`) to use as image input for the model.
- `weights` (`Optional[str]`): one of `None` (random initialization), `"imagenet"` (pre-training on ImageNet), or the path to the weights file to be loaded.
- `include_top` (`bool`): whether to include the fully-connected layer at the top of the network.
- `num_classes` (`int`): optional number of classes to classify images into, only to be specified if `include_top` is `True`, and if no `weights` argument is specified.
Returns
A Keras model instance.
Raises
- ValueError: in case of invalid argument for `weights`, or invalid input shape.
BinaryDenseNet37Dilated
```python
larq_zoo.literature.BinaryDenseNet37Dilated(
    *,
    input_shape=None,
    input_tensor=None,
    weights="imagenet",
    include_top=True,
    num_classes=1000
)
```
Instantiates the BinaryDenseNet 37 Dilated architecture.
Optionally loads weights pre-trained on ImageNet.
Model Summary
+binary_densenet37_dilated stats-----------------------------------------------------------------------------------+
| Layer Input prec. Outputs # 1-bit # 32-bit Memory 1-bit MACs 32-bit MACs |
| (bit) x 1 x 1 (kB) |
+------------------------------------------------------------------------------------------------------------------+
| input_1 - ((None, 224, 224, 3),) 0 0 0 ? ? |
| conv2d - (-1, 112, 112, 64) 0 9408 36.75 0 118013952 |
| batch_normalization - (-1, 112, 112, 64) 0 128 0.50 0 0 |
| activation - (-1, 112, 112, 64) 0 0 0 ? ? |
| max_pooling2d - (-1, 56, 56, 64) 0 0 0 0 0 |
| batch_normalization_1 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| quant_conv2d 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| concatenate - (-1, 56, 56, 128) 0 0 0 ? ? |
| batch_normalization_2 - (-1, 56, 56, 128) 0 256 1.00 0 0 |
| quant_conv2d_1 1 (-1, 56, 56, 64) 73728 0 9.00 231211008 0 |
| concatenate_1 - (-1, 56, 56, 192) 0 0 0 ? ? |
| batch_normalization_3 - (-1, 56, 56, 192) 0 384 1.50 0 0 |
| quant_conv2d_2 1 (-1, 56, 56, 64) 110592 0 13.50 346816512 0 |
| concatenate_2 - (-1, 56, 56, 256) 0 0 0 ? ? |
| batch_normalization_4 - (-1, 56, 56, 256) 0 512 2.00 0 0 |
| quant_conv2d_3 1 (-1, 56, 56, 64) 147456 0 18.00 462422016 0 |
| concatenate_3 - (-1, 56, 56, 320) 0 0 0 ? ? |
| batch_normalization_5 - (-1, 56, 56, 320) 0 640 2.50 0 0 |
| quant_conv2d_4 1 (-1, 56, 56, 64) 184320 0 22.50 578027520 0 |
| concatenate_4 - (-1, 56, 56, 384) 0 0 0 ? ? |
| batch_normalization_6 - (-1, 56, 56, 384) 0 768 3.00 0 0 |
| quant_conv2d_5 1 (-1, 56, 56, 64) 221184 0 27.00 693633024 0 |
| concatenate_5 - (-1, 56, 56, 448) 0 0 0 ? ? |
| batch_normalization_7 - (-1, 56, 56, 448) 0 896 3.50 0 0 |
| max_pooling2d_1 - (-1, 28, 28, 448) 0 0 0 0 0 |
| activation_1 - (-1, 28, 28, 448) 0 0 0 ? ? |
| conv2d_1 - (-1, 28, 28, 128) 0 57344 224.00 0 44957696 |
| batch_normalization_8 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| quant_conv2d_6 1 (-1, 28, 28, 64) 73728 0 9.00 57802752 0 |
| concatenate_6 - (-1, 28, 28, 192) 0 0 0 ? ? |
| batch_normalization_9 - (-1, 28, 28, 192) 0 384 1.50 0 0 |
| quant_conv2d_7 1 (-1, 28, 28, 64) 110592 0 13.50 86704128 0 |
| concatenate_7 - (-1, 28, 28, 256) 0 0 0 ? ? |
| batch_normalization_10 - (-1, 28, 28, 256) 0 512 2.00 0 0 |
| quant_conv2d_8 1 (-1, 28, 28, 64) 147456 0 18.00 115605504 0 |
| concatenate_8 - (-1, 28, 28, 320) 0 0 0 ? ? |
| batch_normalization_11 - (-1, 28, 28, 320) 0 640 2.50 0 0 |
| quant_conv2d_9 1 (-1, 28, 28, 64) 184320 0 22.50 144506880 0 |
| concatenate_9 - (-1, 28, 28, 384) 0 0 0 ? ? |
| batch_normalization_12 - (-1, 28, 28, 384) 0 768 3.00 0 0 |
| quant_conv2d_10 1 (-1, 28, 28, 64) 221184 0 27.00 173408256 0 |
| concatenate_10 - (-1, 28, 28, 448) 0 0 0 ? ? |
| batch_normalization_13 - (-1, 28, 28, 448) 0 896 3.50 0 0 |
| quant_conv2d_11 1 (-1, 28, 28, 64) 258048 0 31.50 202309632 0 |
| concatenate_11 - (-1, 28, 28, 512) 0 0 0 ? ? |
| batch_normalization_14 - (-1, 28, 28, 512) 0 1024 4.00 0 0 |
| quant_conv2d_12 1 (-1, 28, 28, 64) 294912 0 36.00 231211008 0 |
| concatenate_12 - (-1, 28, 28, 576) 0 0 0 ? ? |
| batch_normalization_15 - (-1, 28, 28, 576) 0 1152 4.50 0 0 |
| quant_conv2d_13 1 (-1, 28, 28, 64) 331776 0 40.50 260112384 0 |
| concatenate_13 - (-1, 28, 28, 640) 0 0 0 ? ? |
| batch_normalization_16 - (-1, 28, 28, 640) 0 1280 5.00 0 0 |
| activation_2 - (-1, 28, 28, 640) 0 0 0 ? ? |
| conv2d_2 - (-1, 28, 28, 192) 0 122880 480.00 0 96337920 |
| batch_normalization_17 - (-1, 28, 28, 192) 0 384 1.50 0 0 |
| quant_conv2d_14 1 (-1, 28, 28, 64) 110592 0 13.50 86704128 0 |
| concatenate_14 - (-1, 28, 28, 256) 0 0 0 ? ? |
| batch_normalization_18 - (-1, 28, 28, 256) 0 512 2.00 0 0 |
| quant_conv2d_15 1 (-1, 28, 28, 64) 147456 0 18.00 115605504 0 |
| concatenate_15 - (-1, 28, 28, 320) 0 0 0 ? ? |
| batch_normalization_19 - (-1, 28, 28, 320) 0 640 2.50 0 0 |
| quant_conv2d_16 1 (-1, 28, 28, 64) 184320 0 22.50 144506880 0 |
| concatenate_16 - (-1, 28, 28, 384) 0 0 0 ? ? |
| batch_normalization_20 - (-1, 28, 28, 384) 0 768 3.00 0 0 |
| quant_conv2d_17 1 (-1, 28, 28, 64) 221184 0 27.00 173408256 0 |
| concatenate_17 - (-1, 28, 28, 448) 0 0 0 ? ? |
| batch_normalization_21 - (-1, 28, 28, 448) 0 896 3.50 0 0 |
| quant_conv2d_18 1 (-1, 28, 28, 64) 258048 0 31.50 202309632 0 |
| concatenate_18 - (-1, 28, 28, 512) 0 0 0 ? ? |
| batch_normalization_22 - (-1, 28, 28, 512) 0 1024 4.00 0 0 |
| quant_conv2d_19 1 (-1, 28, 28, 64) 294912 0 36.00 231211008 0 |
| concatenate_19 - (-1, 28, 28, 576) 0 0 0 ? ? |
| batch_normalization_23 - (-1, 28, 28, 576) 0 1152 4.50 0 0 |
| quant_conv2d_20 1 (-1, 28, 28, 64) 331776 0 40.50 260112384 0 |
| concatenate_20 - (-1, 28, 28, 640) 0 0 0 ? ? |
| batch_normalization_24 - (-1, 28, 28, 640) 0 1280 5.00 0 0 |
| quant_conv2d_21 1 (-1, 28, 28, 64) 368640 0 45.00 289013760 0 |
| concatenate_21 - (-1, 28, 28, 704) 0 0 0 ? ? |
| batch_normalization_25 - (-1, 28, 28, 704) 0 1408 5.50 0 0 |
| quant_conv2d_22 1 (-1, 28, 28, 64) 405504 0 49.50 317915136 0 |
| concatenate_22 - (-1, 28, 28, 768) 0 0 0 ? ? |
| batch_normalization_26 - (-1, 28, 28, 768) 0 1536 6.00 0 0 |
| quant_conv2d_23 1 (-1, 28, 28, 64) 442368 0 54.00 346816512 0 |
| concatenate_23 - (-1, 28, 28, 832) 0 0 0 ? ? |
| batch_normalization_27 - (-1, 28, 28, 832) 0 1664 6.50 0 0 |
| quant_conv2d_24 1 (-1, 28, 28, 64) 479232 0 58.50 375717888 0 |
| concatenate_24 - (-1, 28, 28, 896) 0 0 0 ? ? |
| batch_normalization_28 - (-1, 28, 28, 896) 0 1792 7.00 0 0 |
| quant_conv2d_25 1 (-1, 28, 28, 64) 516096 0 63.00 404619264 0 |
| concatenate_25 - (-1, 28, 28, 960) 0 0 0 ? ? |
| batch_normalization_29 - (-1, 28, 28, 960) 0 1920 7.50 0 0 |
| activation_3 - (-1, 28, 28, 960) 0 0 0 ? ? |
| conv2d_3 - (-1, 28, 28, 256) 0 245760 960.00 0 192675840 |
| batch_normalization_30 - (-1, 28, 28, 256) 0 512 2.00 0 0 |
| quant_conv2d_26 1 (-1, 28, 28, 64) 147456 0 18.00 115605504 0 |
| concatenate_26 - (-1, 28, 28, 320) 0 0 0 ? ? |
| batch_normalization_31 - (-1, 28, 28, 320) 0 640 2.50 0 0 |
| quant_conv2d_27 1 (-1, 28, 28, 64) 184320 0 22.50 144506880 0 |
| concatenate_27 - (-1, 28, 28, 384) 0 0 0 ? ? |
| batch_normalization_32 - (-1, 28, 28, 384) 0 768 3.00 0 0 |
| quant_conv2d_28 1 (-1, 28, 28, 64) 221184 0 27.00 173408256 0 |
| concatenate_28 - (-1, 28, 28, 448) 0 0 0 ? ? |
| batch_normalization_33 - (-1, 28, 28, 448) 0 896 3.50 0 0 |
| quant_conv2d_29 1 (-1, 28, 28, 64) 258048 0 31.50 202309632 0 |
| concatenate_29 - (-1, 28, 28, 512) 0 0 0 ? ? |
| batch_normalization_34 - (-1, 28, 28, 512) 0 1024 4.00 0 0 |
| quant_conv2d_30 1 (-1, 28, 28, 64) 294912 0 36.00 231211008 0 |
| concatenate_30 - (-1, 28, 28, 576) 0 0 0 ? ? |
| batch_normalization_35 - (-1, 28, 28, 576) 0 1152 4.50 0 0 |
| quant_conv2d_31 1 (-1, 28, 28, 64) 331776 0 40.50 260112384 0 |
| concatenate_31 - (-1, 28, 28, 640) 0 0 0 ? ? |
| batch_normalization_36 - (-1, 28, 28, 640) 0 1280 5.00 0 0 |
| activation_4 - (-1, 28, 28, 640) 0 0 0 ? ? |
| average_pooling2d - (-1, 1, 1, 640) 0 0 0 0 0 |
| flatten - (-1, 640) 0 0 0 0 0 |
| dense - (-1, 1000) 0 641000 2503.91 0 640000 |
| activation_5 - (-1, 1000) 0 0 0 ? ? |
+------------------------------------------------------------------------------------------------------------------+
| Total 7593984 1108264 5256.16 7774470144 452625408 |
+------------------------------------------------------------------------------------------------------------------+
+binary_densenet37_dilated summary------------+
| Total params 8.7 M |
| Trainable params 8.67 M |
| Non-trainable params 31.9 k |
| Model size 5.13 MiB |
| Model size (8-bit FP weights) 1.96 MiB |
| Float-32 Equivalent 33.20 MiB |
| Compression Ratio of Memory 0.15 |
| Number of MACs 8.23 B |
| Ratio of MACs that are binarized 0.9450 |
+---------------------------------------------+
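The "Ratio of MACs that are binarized" line is the share of all multiply-accumulates performed at 1-bit precision, computed from the Total row of the stats table above. A short sketch:

```python
# MAC counts from the Total row of the binary_densenet37_dilated table.
binary_macs = 7_774_470_144   # 1-bit MACs
fp32_macs = 452_625_408       # 32-bit MACs

total_macs = binary_macs + fp32_macs
print(f"Number of MACs: {total_macs / 1e9:.2f} B")          # 8.23 B
print(f"Ratio binarized: {binary_macs / total_macs:.4f}")   # 0.9450
```

Note that the dilated variant has the same weight counts and model size as BinaryDenseNet37 but roughly 1.75x its MACs (8.23 B vs. 4.71 B), since dilation keeps the later stages at higher spatial resolution.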
ImageNet Metrics
| Top-1 Accuracy | Top-5 Accuracy | Parameters | Memory |
| --- | --- | --- | --- |
| 64.34 % | 85.15 % | 8 734 120 | 5.25 MB |
Arguments
- input_shape Optional[Sequence[Optional[int]]]: optional shape tuple, to be specified if you would like to use a model with an input image resolution that is not (224, 224, 3). It should have exactly 3 input channels.
- input_tensor Optional[tf.Tensor]: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model.
- weights Optional[str]: one of None (random initialization), "imagenet" (pre-training on ImageNet), or the path to the weights file to be loaded.
- include_top bool: whether to include the fully-connected layer at the top of the network.
- num_classes int: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified.
Returns
A Keras model instance.
Raises
- ValueError: in case of invalid argument for weights, or invalid input shape.
References
BinaryDenseNet45¶
larq_zoo.literature.BinaryDenseNet45(
*,
input_shape=None,
input_tensor=None,
weights="imagenet",
include_top=True,
num_classes=1000
)
Instantiates the BinaryDenseNet 45 architecture.
Optionally loads weights pre-trained on ImageNet.
Model Summary
+binary_densenet45 stats--------------------------------------------------------------------------------------------+
| Layer Input prec. Outputs # 1-bit # 32-bit Memory 1-bit MACs 32-bit MACs |
| (bit) x 1 x 1 (kB) |
+-------------------------------------------------------------------------------------------------------------------+
| input_1 - ((None, 224, 224, 3),) 0 0 0 ? ? |
| conv2d - (-1, 112, 112, 64) 0 9408 36.75 0 118013952 |
| batch_normalization - (-1, 112, 112, 64) 0 128 0.50 0 0 |
| activation - (-1, 112, 112, 64) 0 0 0 ? ? |
| max_pooling2d - (-1, 56, 56, 64) 0 0 0 0 0 |
| batch_normalization_1 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| quant_conv2d 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| concatenate - (-1, 56, 56, 128) 0 0 0 ? ? |
| batch_normalization_2 - (-1, 56, 56, 128) 0 256 1.00 0 0 |
| quant_conv2d_1 1 (-1, 56, 56, 64) 73728 0 9.00 231211008 0 |
| concatenate_1 - (-1, 56, 56, 192) 0 0 0 ? ? |
| batch_normalization_3 - (-1, 56, 56, 192) 0 384 1.50 0 0 |
| quant_conv2d_2 1 (-1, 56, 56, 64) 110592 0 13.50 346816512 0 |
| concatenate_2 - (-1, 56, 56, 256) 0 0 0 ? ? |
| batch_normalization_4 - (-1, 56, 56, 256) 0 512 2.00 0 0 |
| quant_conv2d_3 1 (-1, 56, 56, 64) 147456 0 18.00 462422016 0 |
| concatenate_3 - (-1, 56, 56, 320) 0 0 0 ? ? |
| batch_normalization_5 - (-1, 56, 56, 320) 0 640 2.50 0 0 |
| quant_conv2d_4 1 (-1, 56, 56, 64) 184320 0 22.50 578027520 0 |
| concatenate_4 - (-1, 56, 56, 384) 0 0 0 ? ? |
| batch_normalization_6 - (-1, 56, 56, 384) 0 768 3.00 0 0 |
| quant_conv2d_5 1 (-1, 56, 56, 64) 221184 0 27.00 693633024 0 |
| concatenate_5 - (-1, 56, 56, 448) 0 0 0 ? ? |
| batch_normalization_7 - (-1, 56, 56, 448) 0 896 3.50 0 0 |
| max_pooling2d_1 - (-1, 28, 28, 448) 0 0 0 0 0 |
| activation_1 - (-1, 28, 28, 448) 0 0 0 ? ? |
| conv2d_1 - (-1, 28, 28, 160) 0 71680 280.00 0 56197120 |
| batch_normalization_8 - (-1, 28, 28, 160) 0 320 1.25 0 0 |
| quant_conv2d_6 1 (-1, 28, 28, 64) 92160 0 11.25 72253440 0 |
| concatenate_6 - (-1, 28, 28, 224) 0 0 0 ? ? |
| batch_normalization_9 - (-1, 28, 28, 224) 0 448 1.75 0 0 |
| quant_conv2d_7 1 (-1, 28, 28, 64) 129024 0 15.75 101154816 0 |
| concatenate_7 - (-1, 28, 28, 288) 0 0 0 ? ? |
| batch_normalization_10 - (-1, 28, 28, 288) 0 576 2.25 0 0 |
| quant_conv2d_8 1 (-1, 28, 28, 64) 165888 0 20.25 130056192 0 |
| concatenate_8 - (-1, 28, 28, 352) 0 0 0 ? ? |
| batch_normalization_11 - (-1, 28, 28, 352) 0 704 2.75 0 0 |
| quant_conv2d_9 1 (-1, 28, 28, 64) 202752 0 24.75 158957568 0 |
| concatenate_9 - (-1, 28, 28, 416) 0 0 0 ? ? |
| batch_normalization_12 - (-1, 28, 28, 416) 0 832 3.25 0 0 |
| quant_conv2d_10 1 (-1, 28, 28, 64) 239616 0 29.25 187858944 0 |
| concatenate_10 - (-1, 28, 28, 480) 0 0 0 ? ? |
| batch_normalization_13 - (-1, 28, 28, 480) 0 960 3.75 0 0 |
| quant_conv2d_11 1 (-1, 28, 28, 64) 276480 0 33.75 216760320 0 |
| concatenate_11 - (-1, 28, 28, 544) 0 0 0 ? ? |
| batch_normalization_14 - (-1, 28, 28, 544) 0 1088 4.25 0 0 |
| quant_conv2d_12 1 (-1, 28, 28, 64) 313344 0 38.25 245661696 0 |
| concatenate_12 - (-1, 28, 28, 608) 0 0 0 ? ? |
| batch_normalization_15 - (-1, 28, 28, 608) 0 1216 4.75 0 0 |
| quant_conv2d_13 1 (-1, 28, 28, 64) 350208 0 42.75 274563072 0 |
| concatenate_13 - (-1, 28, 28, 672) 0 0 0 ? ? |
| batch_normalization_16 - (-1, 28, 28, 672) 0 1344 5.25 0 0 |
| quant_conv2d_14 1 (-1, 28, 28, 64) 387072 0 47.25 303464448 0 |
| concatenate_14 - (-1, 28, 28, 736) 0 0 0 ? ? |
| batch_normalization_17 - (-1, 28, 28, 736) 0 1472 5.75 0 0 |
| quant_conv2d_15 1 (-1, 28, 28, 64) 423936 0 51.75 332365824 0 |
| concatenate_15 - (-1, 28, 28, 800) 0 0 0 ? ? |
| batch_normalization_18 - (-1, 28, 28, 800) 0 1600 6.25 0 0 |
| quant_conv2d_16 1 (-1, 28, 28, 64) 460800 0 56.25 361267200 0 |
| concatenate_16 - (-1, 28, 28, 864) 0 0 0 ? ? |
| batch_normalization_19 - (-1, 28, 28, 864) 0 1728 6.75 0 0 |
| quant_conv2d_17 1 (-1, 28, 28, 64) 497664 0 60.75 390168576 0 |
| concatenate_17 - (-1, 28, 28, 928) 0 0 0 ? ? |
| batch_normalization_20 - (-1, 28, 28, 928) 0 1856 7.25 0 0 |
| max_pooling2d_2 - (-1, 14, 14, 928) 0 0 0 0 0 |
| activation_2 - (-1, 14, 14, 928) 0 0 0 ? ? |
| conv2d_2 - (-1, 14, 14, 288) 0 267264 1044.00 0 52383744 |
| batch_normalization_21 - (-1, 14, 14, 288) 0 576 2.25 0 0 |
| quant_conv2d_18 1 (-1, 14, 14, 64) 165888 0 20.25 32514048 0 |
| concatenate_18 - (-1, 14, 14, 352) 0 0 0 ? ? |
| batch_normalization_22 - (-1, 14, 14, 352) 0 704 2.75 0 0 |
| quant_conv2d_19 1 (-1, 14, 14, 64) 202752 0 24.75 39739392 0 |
| concatenate_19 - (-1, 14, 14, 416) 0 0 0 ? ? |
| batch_normalization_23 - (-1, 14, 14, 416) 0 832 3.25 0 0 |
| quant_conv2d_20 1 (-1, 14, 14, 64) 239616 0 29.25 46964736 0 |
| concatenate_20 - (-1, 14, 14, 480) 0 0 0 ? ? |
| batch_normalization_24 - (-1, 14, 14, 480) 0 960 3.75 0 0 |
| quant_conv2d_21 1 (-1, 14, 14, 64) 276480 0 33.75 54190080 0 |
| concatenate_21 - (-1, 14, 14, 544) 0 0 0 ? ? |
| batch_normalization_25 - (-1, 14, 14, 544) 0 1088 4.25 0 0 |
| quant_conv2d_22 1 (-1, 14, 14, 64) 313344 0 38.25 61415424 0 |
| concatenate_22 - (-1, 14, 14, 608) 0 0 0 ? ? |
| batch_normalization_26 - (-1, 14, 14, 608) 0 1216 4.75 0 0 |
| quant_conv2d_23 1 (-1, 14, 14, 64) 350208 0 42.75 68640768 0 |
| concatenate_23 - (-1, 14, 14, 672) 0 0 0 ? ? |
| batch_normalization_27 - (-1, 14, 14, 672) 0 1344 5.25 0 0 |
| quant_conv2d_24 1 (-1, 14, 14, 64) 387072 0 47.25 75866112 0 |
| concatenate_24 - (-1, 14, 14, 736) 0 0 0 ? ? |
| batch_normalization_28 - (-1, 14, 14, 736) 0 1472 5.75 0 0 |
| quant_conv2d_25 1 (-1, 14, 14, 64) 423936 0 51.75 83091456 0 |
| concatenate_25 - (-1, 14, 14, 800) 0 0 0 ? ? |
| batch_normalization_29 - (-1, 14, 14, 800) 0 1600 6.25 0 0 |
| quant_conv2d_26 1 (-1, 14, 14, 64) 460800 0 56.25 90316800 0 |
| concatenate_26 - (-1, 14, 14, 864) 0 0 0 ? ? |
| batch_normalization_30 - (-1, 14, 14, 864) 0 1728 6.75 0 0 |
| quant_conv2d_27 1 (-1, 14, 14, 64) 497664 0 60.75 97542144 0 |
| concatenate_27 - (-1, 14, 14, 928) 0 0 0 ? ? |
| batch_normalization_31 - (-1, 14, 14, 928) 0 1856 7.25 0 0 |
| quant_conv2d_28 1 (-1, 14, 14, 64) 534528 0 65.25 104767488 0 |
| concatenate_28 - (-1, 14, 14, 992) 0 0 0 ? ? |
| batch_normalization_32 - (-1, 14, 14, 992) 0 1984 7.75 0 0 |
| quant_conv2d_29 1 (-1, 14, 14, 64) 571392 0 69.75 111992832 0 |
| concatenate_29 - (-1, 14, 14, 1056) 0 0 0 ? ? |
| batch_normalization_33 - (-1, 14, 14, 1056) 0 2112 8.25 0 0 |
| quant_conv2d_30 1 (-1, 14, 14, 64) 608256 0 74.25 119218176 0 |
| concatenate_30 - (-1, 14, 14, 1120) 0 0 0 ? ? |
| batch_normalization_34 - (-1, 14, 14, 1120) 0 2240 8.75 0 0 |
| quant_conv2d_31 1 (-1, 14, 14, 64) 645120 0 78.75 126443520 0 |
| concatenate_31 - (-1, 14, 14, 1184) 0 0 0 ? ? |
| batch_normalization_35 - (-1, 14, 14, 1184) 0 2368 9.25 0 0 |
| max_pooling2d_3 - (-1, 7, 7, 1184) 0 0 0 0 0 |
| activation_3 - (-1, 7, 7, 1184) 0 0 0 ? ? |
| conv2d_3 - (-1, 7, 7, 288) 0 340992 1332.00 0 16708608 |
| batch_normalization_36 - (-1, 7, 7, 288) 0 576 2.25 0 0 |
| quant_conv2d_32 1 (-1, 7, 7, 64) 165888 0 20.25 8128512 0 |
| concatenate_32 - (-1, 7, 7, 352) 0 0 0 ? ? |
| batch_normalization_37 - (-1, 7, 7, 352) 0 704 2.75 0 0 |
| quant_conv2d_33 1 (-1, 7, 7, 64) 202752 0 24.75 9934848 0 |
| concatenate_33 - (-1, 7, 7, 416) 0 0 0 ? ? |
| batch_normalization_38 - (-1, 7, 7, 416) 0 832 3.25 0 0 |
| quant_conv2d_34 1 (-1, 7, 7, 64) 239616 0 29.25 11741184 0 |
| concatenate_34 - (-1, 7, 7, 480) 0 0 0 ? ? |
| batch_normalization_39 - (-1, 7, 7, 480) 0 960 3.75 0 0 |
| quant_conv2d_35 1 (-1, 7, 7, 64) 276480 0 33.75 13547520 0 |
| concatenate_35 - (-1, 7, 7, 544) 0 0 0 ? ? |
| batch_normalization_40 - (-1, 7, 7, 544) 0 1088 4.25 0 0 |
| quant_conv2d_36 1 (-1, 7, 7, 64) 313344 0 38.25 15353856 0 |
| concatenate_36 - (-1, 7, 7, 608) 0 0 0 ? ? |
| batch_normalization_41 - (-1, 7, 7, 608) 0 1216 4.75 0 0 |
| quant_conv2d_37 1 (-1, 7, 7, 64) 350208 0 42.75 17160192 0 |
| concatenate_37 - (-1, 7, 7, 672) 0 0 0 ? ? |
| batch_normalization_42 - (-1, 7, 7, 672) 0 1344 5.25 0 0 |
| quant_conv2d_38 1 (-1, 7, 7, 64) 387072 0 47.25 18966528 0 |
| concatenate_38 - (-1, 7, 7, 736) 0 0 0 ? ? |
| batch_normalization_43 - (-1, 7, 7, 736) 0 1472 5.75 0 0 |
| quant_conv2d_39 1 (-1, 7, 7, 64) 423936 0 51.75 20772864 0 |
| concatenate_39 - (-1, 7, 7, 800) 0 0 0 ? ? |
| batch_normalization_44 - (-1, 7, 7, 800) 0 1600 6.25 0 0 |
| activation_4 - (-1, 7, 7, 800) 0 0 0 ? ? |
| average_pooling2d - (-1, 1, 1, 800) 0 0 0 0 0 |
| flatten - (-1, 800) 0 0 0 0 0 |
| dense - (-1, 1000) 0 801000 3128.91 0 800000 |
| activation_5 - (-1, 1000) 0 0 0 ? ? |
+-------------------------------------------------------------------------------------------------------------------+
| Total 12349440 1540072 7523.41 6430556160 244103424 |
+-------------------------------------------------------------------------------------------------------------------+
+binary_densenet45 summary--------------------+
| Total params 13.9 M |
| Trainable params 13.8 M |
| Non-trainable params 49.7 k |
| Model size 7.35 MiB |
| Model size (8-bit FP weights) 2.94 MiB |
| Float-32 Equivalent 52.98 MiB |
| Compression Ratio of Memory 0.14 |
| Number of MACs 6.67 B |
| Ratio of MACs that are binarized 0.9634 |
+---------------------------------------------+
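The two model-size lines in the summary differ only in how the remaining float weights are stored: 4 bytes each for full precision, or 1 byte each when they are quantized to 8 bits. A sketch reproducing both figures from the Total row above:

```python
# Weight counts from the Total row of the binary_densenet45 table.
binary_params = 12_349_440   # 1-bit weights
fp32_params = 1_540_072      # 32-bit weights

MiB = 1024 ** 2
size_fp32 = binary_params / 8 + fp32_params * 4  # float weights kept at 4 bytes
size_int8 = binary_params / 8 + fp32_params * 1  # float weights stored as 8-bit

print(f"Model size: {size_fp32 / MiB:.2f} MiB")                     # 7.35 MiB
print(f"Model size (8-bit FP weights): {size_int8 / MiB:.2f} MiB")  # 2.94 MiB
```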
ImageNet Metrics
| Top-1 Accuracy | Top-5 Accuracy | Parameters | Memory |
| --- | --- | --- | --- |
| 64.59 % | 85.21 % | 13 939 240 | 7.54 MB |
Arguments
- input_shape Optional[Sequence[Optional[int]]]: optional shape tuple, to be specified if you would like to use a model with an input image resolution that is not (224, 224, 3). It should have exactly 3 input channels.
- input_tensor Optional[tf.Tensor]: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model.
- weights Optional[str]: one of None (random initialization), "imagenet" (pre-training on ImageNet), or the path to the weights file to be loaded.
- include_top bool: whether to include the fully-connected layer at the top of the network.
- num_classes int: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified.
Returns
A Keras model instance.
Raises
- ValueError: in case of invalid argument for weights, or invalid input shape.
References
DoReFaNet¶
larq_zoo.literature.DoReFaNet(
*,
input_shape=None,
input_tensor=None,
weights="imagenet",
include_top=True,
num_classes=1000
)
Instantiates the DoReFa-net architecture.
Optionally loads weights pre-trained on ImageNet.
Model Summary
+dorefanet stats----------------------------------------------------------------------------------------------------+
| Layer Input prec. Outputs # 1-bit # 32-bit Memory 2-bit MACs 32-bit MACs |
| (bit) x 1 x 1 (kB) |
+-------------------------------------------------------------------------------------------------------------------+
| input_1 - ((None, 224, 224, 3),) 0 0 0 ? ? |
| conv2d - (-1, 54, 54, 96) 0 41568 162.38 0 120932352 |
| quant_conv2d 2 (-1, 54, 54, 256) 614400 0 75.00 1791590400 0 |
| batch_normalization - (-1, 54, 54, 256) 0 512 2.00 0 0 |
| max_pooling2d - (-1, 27, 27, 256) 0 0 0 0 0 |
| quant_conv2d_1 2 (-1, 27, 27, 384) 884736 0 108.00 644972544 0 |
| batch_normalization_1 - (-1, 27, 27, 384) 0 768 3.00 0 0 |
| max_pooling2d_1 - (-1, 14, 14, 384) 0 0 0 0 0 |
| quant_conv2d_2 2 (-1, 14, 14, 384) 1327104 0 162.00 260112384 0 |
| batch_normalization_2 - (-1, 14, 14, 384) 0 768 3.00 0 0 |
| quant_conv2d_3 2 (-1, 14, 14, 256) 884736 0 108.00 173408256 0 |
| batch_normalization_3 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| max_pooling2d_2 - (-1, 6, 6, 256) 0 0 0 0 0 |
| flatten - (-1, 9216) 0 0 0 0 0 |
| quant_dense 2 (-1, 4096) 37748736 0 4608.00 37748736 0 |
| batch_normalization_4 - (-1, 4096) 0 8192 32.00 0 0 |
| quant_dense_1 2 (-1, 4096) 16777216 0 2048.00 16777216 0 |
| batch_normalization_5 - (-1, 4096) 0 8192 32.00 0 0 |
| activation - (-1, 4096) 0 0 0 ? ? |
| dense - (-1, 1000) 0 4097000 16003.91 0 4096000 |
| activation_1 - (-1, 1000) 0 0 0 ? ? |
+-------------------------------------------------------------------------------------------------------------------+
| Total 58236928 4157512 23349.28 2924609536 125028352 |
+-------------------------------------------------------------------------------------------------------------------+
+dorefanet summary------------------------------+
| Total params 62.4 M |
| Trainable params 62.4 M |
| Non-trainable params 18.9 k |
| Model size 22.80 MiB |
| Model size (8-bit FP weights) 10.91 MiB |
| Float-32 Equivalent 238.02 MiB |
| Compression Ratio of Memory 0.10 |
| Number of MACs 3.05 B |
| Ratio of MACs that are ternarized 0.9590 |
+-----------------------------------------------+
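Unlike the BinaryDenseNet models, DoReFa-Net's quantized layers use 2-bit inputs (see the "Input prec." column), so the summary reports the fraction of low-precision MACs as "ternarized" rather than binarized. The ratio is computed the same way, from the Total row above:

```python
# Totals from the dorefanet stats table above.
low_prec_macs = 2_924_609_536   # 2-bit MACs
fp32_macs = 125_028_352         # 32-bit MACs

total = low_prec_macs + fp32_macs
print(f"Number of MACs: {total / 1e9:.2f} B")               # 3.05 B
print(f"Low-precision MAC ratio: {low_prec_macs / total:.4f}")  # 0.9590
```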
ImageNet Metrics
| Top-1 Accuracy | Top-5 Accuracy | Parameters | Memory |
| --- | --- | --- | --- |
| 53.39 % | 76.50 % | 62 403 912 | 22.84 MB |
Arguments
- input_shape Optional[Sequence[Optional[int]]]: optional shape tuple, to be specified if you would like to use a model with an input image resolution that is not (224, 224, 3). It should have exactly 3 input channels.
- input_tensor Optional[tf.Tensor]: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model.
- weights Optional[str]: one of None (random initialization), "imagenet" (pre-training on ImageNet), or the path to the weights file to be loaded.
- include_top bool: whether to include the fully-connected layer at the top of the network.
- num_classes int: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified.
Returns
A Keras model instance.
Raises
- ValueError: in case of invalid argument for weights, or invalid input shape.
References
RealToBinaryNet¶
larq_zoo.literature.RealToBinaryNet(
*,
input_shape=None,
input_tensor=None,
weights="imagenet",
include_top=True,
num_classes=1000
)
Instantiates the BNN version of the Real-to-Binary network from Martinez et al.
Optionally loads weights pre-trained on ImageNet.
Model Summary
+r2b_bnn stats-----------------------------------------------------------------------------------------------------------------------+
| Layer Input prec. Outputs # 1-bit # 32-bit Memory 1-bit MACs 32-bit MACs |
| (bit) x 1 x 1 (kB) |
+------------------------------------------------------------------------------------------------------------------------------------+
| input_1 - ((None, 224, 224, 3),) 0 0 0 ? ? |
| r2b_bnn_block_1_conv2d - (-1, 112, 112, 64) 0 9408 36.75 0 118013952 |
| r2b_bnn_block_1_batch_norm - (-1, 112, 112, 64) 0 128 0.50 0 0 |
| r2b_bnn_block_1_prelu - (-1, 112, 112, 64) 0 64 0.25 ? ? |
| r2b_bnn_block_1_pool - (-1, 56, 56, 64) 0 0 0 0 0 |
| r2b_bnn_block_2a_batch_norm - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| r2b_bnn_block_2a_scaling_pool_pool - (-1, 1, 1, 64) 0 0 0 0 0 |
| r2b_bnn_block_2a_scaling_pool_flatten - (-1, 64) 0 0 0 0 0 |
| r2b_bnn_block_2a_scaling_dense_reduce - (-1, 8) 0 512 2.00 0 512 |
| r2b_bnn_block_2a_scaling_dense_expand - (-1, 64) 0 512 2.00 0 512 |
| r2b_bnn_block_2a_conv2d 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| r2b_bnn_block_2a_scaling_reshape - (-1, 1, 1, 64) 0 0 0 ? ? |
| r2b_bnn_block_2a_scaling_multiplication - (-1, 56, 56, 64) 0 0 0 ? ? |
| r2b_bnn_block_2a_prelu - (-1, 56, 56, 64) 0 64 0.25 ? ? |
| r2b_bnn_block_2a_skip_add - (-1, 56, 56, 64) 0 0 0 ? ? |
| r2b_bnn_block_2b_batch_norm - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| r2b_bnn_block_2b_scaling_pool_pool - (-1, 1, 1, 64) 0 0 0 0 0 |
| r2b_bnn_block_2b_scaling_pool_flatten - (-1, 64) 0 0 0 0 0 |
| r2b_bnn_block_2b_scaling_dense_reduce - (-1, 8) 0 512 2.00 0 512 |
| r2b_bnn_block_2b_scaling_dense_expand - (-1, 64) 0 512 2.00 0 512 |
| r2b_bnn_block_2b_conv2d 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| r2b_bnn_block_2b_scaling_reshape - (-1, 1, 1, 64) 0 0 0 ? ? |
| r2b_bnn_block_2b_scaling_multiplication - (-1, 56, 56, 64) 0 0 0 ? ? |
| r2b_bnn_block_2b_prelu - (-1, 56, 56, 64) 0 64 0.25 ? ? |
| r2b_bnn_block_2b_skip_add - (-1, 56, 56, 64) 0 0 0 ? ? |
| r2b_bnn_block_2_out - (-1, 56, 56, 64) 0 0 0 ? ? |
| r2b_bnn_block_3a_batch_norm - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| r2b_bnn_block_3a_scaling_pool_pool - (-1, 1, 1, 64) 0 0 0 0 0 |
| r2b_bnn_block_3a_scaling_pool_flatten - (-1, 64) 0 0 0 0 0 |
| r2b_bnn_block_3a_scaling_dense_reduce - (-1, 8) 0 512 2.00 0 512 |
| r2b_bnn_block_3a_scaling_dense_expand - (-1, 64) 0 512 2.00 0 512 |
| r2b_bnn_block_3a_conv2d 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| r2b_bnn_block_3a_scaling_reshape - (-1, 1, 1, 64) 0 0 0 ? ? |
| r2b_bnn_block_3a_scaling_multiplication - (-1, 56, 56, 64) 0 0 0 ? ? |
| r2b_bnn_block_3a_prelu - (-1, 56, 56, 64) 0 64 0.25 ? ? |
| r2b_bnn_block_3a_skip_add - (-1, 56, 56, 64) 0 0 0 ? ? |
| r2b_bnn_block_3b_batch_norm - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| r2b_bnn_block_3b_scaling_pool_pool - (-1, 1, 1, 64) 0 0 0 0 0 |
| r2b_bnn_block_3b_scaling_pool_flatten - (-1, 64) 0 0 0 0 0 |
| r2b_bnn_block_3b_scaling_dense_reduce - (-1, 8) 0 512 2.00 0 512 |
| r2b_bnn_block_3b_scaling_dense_expand - (-1, 64) 0 512 2.00 0 512 |
| r2b_bnn_block_3b_conv2d 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| r2b_bnn_block_3b_scaling_reshape - (-1, 1, 1, 64) 0 0 0 ? ? |
| r2b_bnn_block_3b_scaling_multiplication - (-1, 56, 56, 64) 0 0 0 ? ? |
| r2b_bnn_block_3b_prelu - (-1, 56, 56, 64) 0 64 0.25 ? ? |
| r2b_bnn_block_3b_skip_add - (-1, 56, 56, 64) 0 0 0 ? ? |
| r2b_bnn_block_3_out - (-1, 56, 56, 64) 0 0 0 ? ? |
| r2b_bnn_block_4a_batch_norm - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| r2b_bnn_block_4a_scaling_pool_pool - (-1, 1, 1, 64) 0 0 0 0 0 |
| r2b_bnn_block_4a_scaling_pool_flatten - (-1, 64) 0 0 0 0 0 |
| r2b_bnn_block_4a_scaling_dense_reduce - (-1, 8) 0 512 2.00 0 512 |
| r2b_bnn_block_4a_scaling_dense_expand - (-1, 128) 0 1024 4.00 0 1024 |
| r2b_bnn_block_4a_conv2d 1 (-1, 28, 28, 128) 73728 0 9.00 57802752 0 |
| r2b_bnn_block_4a_scaling_reshape - (-1, 1, 1, 128) 0 0 0 ? ? |
| r2b_bnn_block_4a_shortcut_pool - (-1, 28, 28, 64) 0 0 0 0 0 |
| r2b_bnn_block_4a_scaling_multiplication - (-1, 28, 28, 128) 0 0 0 ? ? |
| r2b_bnn_block_4a_shortcut_conv2d - (-1, 28, 28, 128) 0 8192 32.00 0 6422528 |
| r2b_bnn_block_4a_prelu - (-1, 28, 28, 128) 0 128 0.50 ? ? |
| r2b_bnn_block_4a_shortcut_batch_norm - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| r2b_bnn_block_4a_skip_add - (-1, 28, 28, 128) 0 0 0 ? ? |
| r2b_bnn_block_4b_batch_norm - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| r2b_bnn_block_4b_scaling_pool_pool - (-1, 1, 1, 128) 0 0 0 0 0 |
| r2b_bnn_block_4b_scaling_pool_flatten - (-1, 128) 0 0 0 0 0 |
| r2b_bnn_block_4b_scaling_dense_reduce - (-1, 16) 0 2048 8.00 0 2048 |
| r2b_bnn_block_4b_scaling_dense_expand - (-1, 128) 0 2048 8.00 0 2048 |
| r2b_bnn_block_4b_conv2d 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| r2b_bnn_block_4b_scaling_reshape - (-1, 1, 1, 128) 0 0 0 ? ? |
| r2b_bnn_block_4b_scaling_multiplication - (-1, 28, 28, 128) 0 0 0 ? ? |
| r2b_bnn_block_4b_prelu - (-1, 28, 28, 128) 0 128 0.50 ? ? |
| r2b_bnn_block_4b_skip_add - (-1, 28, 28, 128) 0 0 0 ? ? |
| r2b_bnn_block_4_out - (-1, 28, 28, 128) 0 0 0 ? ? |
| r2b_bnn_block_5a_batch_norm - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| r2b_bnn_block_5a_scaling_pool_pool - (-1, 1, 1, 128) 0 0 0 0 0 |
| r2b_bnn_block_5a_scaling_pool_flatten - (-1, 128) 0 0 0 0 0 |
| r2b_bnn_block_5a_scaling_dense_reduce - (-1, 16) 0 2048 8.00 0 2048 |
| r2b_bnn_block_5a_scaling_dense_expand - (-1, 128) 0 2048 8.00 0 2048 |
| r2b_bnn_block_5a_conv2d 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| r2b_bnn_block_5a_scaling_reshape - (-1, 1, 1, 128) 0 0 0 ? ? |
| r2b_bnn_block_5a_scaling_multiplication - (-1, 28, 28, 128) 0 0 0 ? ? |
| r2b_bnn_block_5a_prelu - (-1, 28, 28, 128) 0 128 0.50 ? ? |
| r2b_bnn_block_5a_skip_add - (-1, 28, 28, 128) 0 0 0 ? ? |
| r2b_bnn_block_5b_batch_norm - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| r2b_bnn_block_5b_scaling_pool_pool - (-1, 1, 1, 128) 0 0 0 0 0 |
| r2b_bnn_block_5b_scaling_pool_flatten - (-1, 128) 0 0 0 0 0 |
| r2b_bnn_block_5b_scaling_dense_reduce - (-1, 16) 0 2048 8.00 0 2048 |
| r2b_bnn_block_5b_scaling_dense_expand - (-1, 128) 0 2048 8.00 0 2048 |
| r2b_bnn_block_5b_conv2d 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| r2b_bnn_block_5b_scaling_reshape - (-1, 1, 1, 128) 0 0 0 ? ? |
| r2b_bnn_block_5b_scaling_multiplication - (-1, 28, 28, 128) 0 0 0 ? ? |
| r2b_bnn_block_5b_prelu - (-1, 28, 28, 128) 0 128 0.50 ? ? |
| r2b_bnn_block_5b_skip_add - (-1, 28, 28, 128) 0 0 0 ? ? |
| r2b_bnn_block_5_out - (-1, 28, 28, 128) 0 0 0 ? ? |
| r2b_bnn_block_6a_batch_norm - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| r2b_bnn_block_6a_scaling_pool_pool - (-1, 1, 1, 128) 0 0 0 0 0 |
| r2b_bnn_block_6a_scaling_pool_flatten - (-1, 128) 0 0 0 0 0 |
| r2b_bnn_block_6a_scaling_dense_reduce - (-1, 16) 0 2048 8.00 0 2048 |
| r2b_bnn_block_6a_scaling_dense_expand - (-1, 256) 0 4096 16.00 0 4096 |
| r2b_bnn_block_6a_conv2d 1 (-1, 14, 14, 256) 294912 0 36.00 57802752 0 |
| r2b_bnn_block_6a_scaling_reshape - (-1, 1, 1, 256) 0 0 0 ? ? |
| r2b_bnn_block_6a_shortcut_pool - (-1, 14, 14, 128) 0 0 0 0 0 |
| r2b_bnn_block_6a_scaling_multiplication - (-1, 14, 14, 256) 0 0 0 ? ? |
| r2b_bnn_block_6a_shortcut_conv2d - (-1, 14, 14, 256) 0 32768 128.00 0 6422528 |
| r2b_bnn_block_6a_prelu - (-1, 14, 14, 256) 0 256 1.00 ? ? |
| r2b_bnn_block_6a_shortcut_batch_norm - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| r2b_bnn_block_6a_skip_add - (-1, 14, 14, 256) 0 0 0 ? ? |
| r2b_bnn_block_6b_batch_norm - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| r2b_bnn_block_6b_scaling_pool_pool - (-1, 1, 1, 256) 0 0 0 0 0 |
| r2b_bnn_block_6b_scaling_pool_flatten - (-1, 256) 0 0 0 0 0 |
| r2b_bnn_block_6b_scaling_dense_reduce - (-1, 32) 0 8192 32.00 0 8192 |
| r2b_bnn_block_6b_scaling_dense_expand - (-1, 256) 0 8192 32.00 0 8192 |
| r2b_bnn_block_6b_conv2d 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| r2b_bnn_block_6b_scaling_reshape - (-1, 1, 1, 256) 0 0 0 ? ? |
| r2b_bnn_block_6b_scaling_multiplication - (-1, 14, 14, 256) 0 0 0 ? ? |
| r2b_bnn_block_6b_prelu - (-1, 14, 14, 256) 0 256 1.00 ? ? |
| r2b_bnn_block_6b_skip_add - (-1, 14, 14, 256) 0 0 0 ? ? |
| r2b_bnn_block_6_out - (-1, 14, 14, 256) 0 0 0 ? ? |
| r2b_bnn_block_7a_batch_norm - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| r2b_bnn_block_7a_scaling_pool_pool - (-1, 1, 1, 256) 0 0 0 0 0 |
| r2b_bnn_block_7a_scaling_pool_flatten - (-1, 256) 0 0 0 0 0 |
| r2b_bnn_block_7a_scaling_dense_reduce - (-1, 32) 0 8192 32.00 0 8192 |
| r2b_bnn_block_7a_scaling_dense_expand - (-1, 256) 0 8192 32.00 0 8192 |
| r2b_bnn_block_7a_conv2d 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| r2b_bnn_block_7a_scaling_reshape - (-1, 1, 1, 256) 0 0 0 ? ? |
| r2b_bnn_block_7a_scaling_multiplication - (-1, 14, 14, 256) 0 0 0 ? ? |
| r2b_bnn_block_7a_prelu - (-1, 14, 14, 256) 0 256 1.00 ? ? |
| r2b_bnn_block_7a_skip_add - (-1, 14, 14, 256) 0 0 0 ? ? |
| r2b_bnn_block_7b_batch_norm - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| r2b_bnn_block_7b_scaling_pool_pool - (-1, 1, 1, 256) 0 0 0 0 0 |
| r2b_bnn_block_7b_scaling_pool_flatten - (-1, 256) 0 0 0 0 0 |
| r2b_bnn_block_7b_scaling_dense_reduce - (-1, 32) 0 8192 32.00 0 8192 |
| r2b_bnn_block_7b_scaling_dense_expand - (-1, 256) 0 8192 32.00 0 8192 |
| r2b_bnn_block_7b_conv2d 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| r2b_bnn_block_7b_scaling_reshape - (-1, 1, 1, 256) 0 0 0 ? ? |
| r2b_bnn_block_7b_scaling_multiplication - (-1, 14, 14, 256) 0 0 0 ? ? |
| r2b_bnn_block_7b_prelu - (-1, 14, 14, 256) 0 256 1.00 ? ? |
| r2b_bnn_block_7b_skip_add - (-1, 14, 14, 256) 0 0 0 ? ? |
| r2b_bnn_block_7_out - (-1, 14, 14, 256) 0 0 0 ? ? |
| r2b_bnn_block_8a_batch_norm - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| r2b_bnn_block_8a_scaling_pool_pool - (-1, 1, 1, 256) 0 0 0 0 0 |
| r2b_bnn_block_8a_scaling_pool_flatten - (-1, 256) 0 0 0 0 0 |
| r2b_bnn_block_8a_scaling_dense_reduce - (-1, 32) 0 8192 32.00 0 8192 |
| r2b_bnn_block_8a_scaling_dense_expand - (-1, 512) 0 16384 64.00 0 16384 |
| r2b_bnn_block_8a_conv2d 1 (-1, 7, 7, 512) 1179648 0 144.00 57802752 0 |
| r2b_bnn_block_8a_scaling_reshape - (-1, 1, 1, 512) 0 0 0 ? ? |
| r2b_bnn_block_8a_shortcut_pool - (-1, 7, 7, 256) 0 0 0 0 0 |
| r2b_bnn_block_8a_scaling_multiplication - (-1, 7, 7, 512) 0 0 0 ? ? |
| r2b_bnn_block_8a_shortcut_conv2d - (-1, 7, 7, 512) 0 131072 512.00 0 6422528 |
| r2b_bnn_block_8a_prelu - (-1, 7, 7, 512) 0 512 2.00 ? ? |
| r2b_bnn_block_8a_shortcut_batch_norm - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| r2b_bnn_block_8a_skip_add - (-1, 7, 7, 512) 0 0 0 ? ? |
| r2b_bnn_block_8b_batch_norm - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| r2b_bnn_block_8b_scaling_pool_pool - (-1, 1, 1, 512) 0 0 0 0 0 |
| r2b_bnn_block_8b_scaling_pool_flatten - (-1, 512) 0 0 0 0 0 |
| r2b_bnn_block_8b_scaling_dense_reduce - (-1, 64) 0 32768 128.00 0 32768 |
| r2b_bnn_block_8b_scaling_dense_expand - (-1, 512) 0 32768 128.00 0 32768 |
| r2b_bnn_block_8b_conv2d 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| r2b_bnn_block_8b_scaling_reshape - (-1, 1, 1, 512) 0 0 0 ? ? |
| r2b_bnn_block_8b_scaling_multiplication - (-1, 7, 7, 512) 0 0 0 ? ? |
| r2b_bnn_block_8b_prelu - (-1, 7, 7, 512) 0 512 2.00 ? ? |
| r2b_bnn_block_8b_skip_add - (-1, 7, 7, 512) 0 0 0 ? ? |
| r2b_bnn_block_8_out - (-1, 7, 7, 512) 0 0 0 ? ? |
| r2b_bnn_block_9a_batch_norm - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| r2b_bnn_block_9a_scaling_pool_pool - (-1, 1, 1, 512) 0 0 0 0 0 |
| r2b_bnn_block_9a_scaling_pool_flatten - (-1, 512) 0 0 0 0 0 |
| r2b_bnn_block_9a_scaling_dense_reduce - (-1, 64) 0 32768 128.00 0 32768 |
| r2b_bnn_block_9a_scaling_dense_expand - (-1, 512) 0 32768 128.00 0 32768 |
| r2b_bnn_block_9a_conv2d 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| r2b_bnn_block_9a_scaling_reshape - (-1, 1, 1, 512) 0 0 0 ? ? |
| r2b_bnn_block_9a_scaling_multiplication - (-1, 7, 7, 512) 0 0 0 ? ? |
| r2b_bnn_block_9a_prelu - (-1, 7, 7, 512) 0 512 2.00 ? ? |
| r2b_bnn_block_9a_skip_add - (-1, 7, 7, 512) 0 0 0 ? ? |
| r2b_bnn_block_9b_batch_norm - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| r2b_bnn_block_9b_scaling_pool_pool - (-1, 1, 1, 512) 0 0 0 0 0 |
| r2b_bnn_block_9b_scaling_pool_flatten - (-1, 512) 0 0 0 0 0 |
| r2b_bnn_block_9b_scaling_dense_reduce - (-1, 64) 0 32768 128.00 0 32768 |
| r2b_bnn_block_9b_scaling_dense_expand - (-1, 512) 0 32768 128.00 0 32768 |
| r2b_bnn_block_9b_conv2d 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| r2b_bnn_block_9b_scaling_reshape - (-1, 1, 1, 512) 0 0 0 ? ? |
| r2b_bnn_block_9b_scaling_multiplication - (-1, 7, 7, 512) 0 0 0 ? ? |
| r2b_bnn_block_9b_prelu - (-1, 7, 7, 512) 0 512 2.00 ? ? |
| r2b_bnn_block_9b_skip_add - (-1, 7, 7, 512) 0 0 0 ? ? |
| r2b_bnn_block_9_out - (-1, 7, 7, 512) 0 0 0 ? ? |
| r2b_bnn_block_10_global_pool_pool - (-1, 1, 1, 512) 0 0 0 0 0 |
| r2b_bnn_block_10_global_pool_flatten - (-1, 512) 0 0 0 0 0 |
| r2b_bnn_block_10_logits - (-1, 1000) 0 513000 2003.91 0 512000 |
| r2b_bnn_block_10_probs - (-1, 1000) 0 0 0 ? ? |
+------------------------------------------------------------------------------------------------------------------------------------+
| Total 10985472 1001448 5252.91 1676279808 138087936 |
+------------------------------------------------------------------------------------------------------------------------------------+
+r2b_bnn summary------------------------------+
| Total params 12 M |
| Trainable params 12 M |
| Non-trainable params 8.7 k |
| Model size 5.13 MiB |
| Model size (8-bit FP weights) 2.26 MiB |
| Float-32 Equivalent 45.73 MiB |
| Compression Ratio of Memory 0.11 |
| Number of MACs 1.81 B |
| Ratio of MACs that are binarized 0.9239 |
+---------------------------------------------+
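The summary figures above follow directly from the table totals: 1-bit weights cost 1/8 byte each, 32-bit weights cost 4 bytes each, and the binarized-MAC ratio is the 1-bit share of all MACs. A quick sanity check in plain Python (numbers copied from the totals row):

```python
# Recompute the r2b_bnn summary statistics from the table totals.
bin_params = 10_985_472     # total 1-bit parameters
fp_params = 1_001_448       # total 32-bit parameters
bin_macs = 1_676_279_808    # 1-bit MACs
fp_macs = 138_087_936       # 32-bit MACs

MiB = 1024 ** 2

# 1-bit weights take 1/8 byte each; 32-bit weights take 4 bytes each.
model_size_mib = (bin_params / 8 + fp_params * 4) / MiB
float32_equiv_mib = (bin_params + fp_params) * 4 / MiB

compression_ratio = model_size_mib / float32_equiv_mib
binarized_mac_ratio = bin_macs / (bin_macs + fp_macs)

print(f"{model_size_mib:.2f} MiB")   # 5.13 MiB
print(f"{compression_ratio:.2f}")    # 0.11
print(f"{binarized_mac_ratio:.4f}")  # 0.9239
```

This reproduces the "Model size", "Compression Ratio of Memory", and "Ratio of MACs that are binarized" rows of the summary box.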
ImageNet Metrics

| Top-1 Accuracy | Top-5 Accuracy | Parameters | Memory |
|---|---|---|---|
| 65.01 % | 85.72 % | 11 995 624 | 5.13 MB |
Arguments

- `input_shape` (`Optional[Sequence[Optional[int]]]`): Optional shape tuple, to be specified if you would like to use a model with an input image resolution that is not (224, 224, 3). It should have exactly 3 input channels.
- `input_tensor` (`Optional[tf.Tensor]`): Optional Keras tensor (i.e. the output of `layers.Input()`) to use as the image input for the model.
- `weights` (`Optional[str]`): One of `None` (random initialization), `"imagenet"` (pre-training on ImageNet), or the path to a weights file to be loaded.
- `include_top` (`bool`): Whether to include the fully-connected layer at the top of the network.
- `num_classes` (`int`): Optional number of classes to classify images into, only to be specified if `include_top` is `True` and no `weights` argument is specified.
Returns
A Keras model instance.
Raises

- `ValueError`: in case of an invalid argument for `weights`, or an invalid input shape.
References
XNORNet¶
larq_zoo.literature.XNORNet(
*,
input_shape=None,
input_tensor=None,
weights="imagenet",
include_top=True,
num_classes=1000
)
Instantiates the XNOR-Net architecture.
Optionally loads weights pre-trained on ImageNet.
Model Summary
+xnornet stats------------------------------------------------------------------------------------------------------+
| Layer Input prec. Outputs # 1-bit # 32-bit Memory 1-bit MACs 32-bit MACs |
| (bit) x 1 x 1 (kB) |
+-------------------------------------------------------------------------------------------------------------------+
| input_1 - ((None, 224, 224, 3),) 0 0 0 ? ? |
| conv2d - (-1, 56, 56, 96) 0 34848 136.12 0 109283328 |
| batch_normalization - (-1, 56, 56, 96) 0 192 0.75 0 0 |
| activation - (-1, 56, 56, 96) 0 0 0 ? ? |
| max_pooling2d - (-1, 27, 27, 96) 0 0 0 0 0 |
| batch_normalization_1 - (-1, 27, 27, 96) 0 192 0.75 0 0 |
| quant_conv2d 1 (-1, 27, 27, 256) 614400 0 75.00 447897600 0 |
| max_pooling2d_1 - (-1, 13, 13, 256) 0 0 0 0 0 |
| batch_normalization_2 - (-1, 13, 13, 256) 0 512 2.00 0 0 |
| quant_conv2d_1 1 (-1, 13, 13, 384) 884736 0 108.00 149520384 0 |
| batch_normalization_3 - (-1, 13, 13, 384) 0 768 3.00 0 0 |
| quant_conv2d_2 1 (-1, 13, 13, 384) 1327104 0 162.00 224280576 0 |
| batch_normalization_4 - (-1, 13, 13, 384) 0 768 3.00 0 0 |
| quant_conv2d_3 1 (-1, 13, 13, 256) 884736 0 108.00 149520384 0 |
| max_pooling2d_2 - (-1, 6, 6, 256) 0 0 0 0 0 |
| batch_normalization_5 - (-1, 6, 6, 256) 0 512 2.00 0 0 |
| quant_conv2d_4 1 (-1, 1, 1, 4096) 37748736 0 4608.00 37748736 0 |
| batch_normalization_6 - (-1, 1, 1, 4096) 0 8192 32.00 0 0 |
| quant_conv2d_5 1 (-1, 1, 1, 4096) 16777216 0 2048.00 16777216 0 |
| batch_normalization_7 - (-1, 1, 1, 4096) 0 8192 32.00 0 0 |
| activation_1 - (-1, 1, 1, 4096) 0 0 0 ? ? |
| flatten - (-1, 4096) 0 0 0 0 0 |
| dense - (-1, 1000) 0 4096000 16000.00 0 4096000 |
| activation_2 - (-1, 1000) 0 0 0 ? ? |
+-------------------------------------------------------------------------------------------------------------------+
| Total 58236928 4150176 23320.62 1025744896 113379328 |
+-------------------------------------------------------------------------------------------------------------------+
+xnornet summary-------------------------------+
| Total params 62.4 M |
| Trainable params 62.4 M |
| Non-trainable params 19.3 k |
| Model size 22.77 MiB |
| Model size (8-bit FP weights) 10.90 MiB |
| Float-32 Equivalent 237.99 MiB |
| Compression Ratio of Memory 0.10 |
| Number of MACs 1.14 B |
| Ratio of MACs that are binarized 0.9005 |
+----------------------------------------------+
ImageNet Metrics

| Top-1 Accuracy | Top-5 Accuracy | Parameters | Memory |
|---|---|---|---|
| 44.96 % | 69.18 % | 62 396 768 | 22.81 MB |
Arguments

- `input_shape` (`Optional[Sequence[Optional[int]]]`): Optional shape tuple, to be specified if you would like to use a model with an input image resolution that is not (224, 224, 3). It should have exactly 3 input channels.
- `input_tensor` (`Optional[tf.Tensor]`): Optional Keras tensor (i.e. the output of `layers.Input()`) to use as the image input for the model.
- `weights` (`Optional[str]`): One of `None` (random initialization), `"imagenet"` (pre-training on ImageNet), or the path to a weights file to be loaded.
- `include_top` (`bool`): Whether to include the fully-connected layer at the top of the network.
- `num_classes` (`int`): Optional number of classes to classify images into, only to be specified if `include_top` is `True` and no `weights` argument is specified.
Returns
A Keras model instance.
Raises

- `ValueError`: in case of an invalid argument for `weights`, or an invalid input shape.
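A minimal usage sketch, assuming `larq_zoo` and TensorFlow are installed; the file name `cat.jpg` is a placeholder, and `lqz.preprocess_input` is the preprocessing helper Larq Zoo provides for its ImageNet models:

```python
import larq_zoo as lqz
import tensorflow as tf

# Instantiate XNOR-Net with pre-trained ImageNet weights; include_top=True
# keeps the 1000-way classifier head.
model = lqz.literature.XNORNet(weights="imagenet", include_top=True)

# Classify a single 224x224 RGB image ("cat.jpg" is a placeholder path).
img = tf.keras.utils.load_img("cat.jpg", target_size=(224, 224))
x = tf.keras.utils.img_to_array(img)
x = lqz.preprocess_input(x)          # model-specific preprocessing
preds = model.predict(x[None, ...])  # shape (1, 1000)
```

For feature extraction, pass `include_top=False` and supply an `input_shape` instead.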
References