All convolutions in the dense block are ReLU-activated and use batch normalization. Channel-wise concatenation is only possible if the height and width dimensions of the data stay unchanged, so convolutions within a dense block all have stride 1. Pooling layers are inserted between dense blocks for dimensionality reduction.
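The sketch below illustrates this layout in PyTorch: stride-1 convolutions with batch normalization and ReLU inside the block (shown here in the BN-ReLU-Conv ordering common to DenseNet variants, an assumption since the text does not fix the order), channel-wise concatenation of each layer's output with its input, and a pooling transition between blocks. The class names and the growth_rate parameter are illustrative, not from the original text.

```python
import torch
import torch.nn as nn


class DenseLayer(nn.Module):
    """One BN -> ReLU -> 3x3 conv unit; stride 1 with padding 1 keeps
    height and width unchanged so outputs can be concatenated."""
    def __init__(self, in_channels: int, growth_rate: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, stride=1, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv(self.relu(self.bn(x)))
        # Channel-wise concatenation: valid only because the spatial
        # dimensions of x and out are identical.
        return torch.cat([x, out], dim=1)


class TransitionDown(nn.Module):
    """Pooling between dense blocks performs the dimensionality
    reduction that stride-1 convolutions inside a block avoid."""
    def __init__(self):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(x)


# Usage: channels grow by growth_rate per layer inside the block;
# spatial size halves only at the transition.
block = nn.Sequential(DenseLayer(16, 12), DenseLayer(28, 12))
x = torch.randn(1, 16, 32, 32)
y = block(x)             # shape: (1, 40, 32, 32)
z = TransitionDown()(y)  # shape: (1, 40, 16, 16)
```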