Deep Learning is not so mysterious or different

Deep neural networks are often seen as different from other model classes because they defy conventional notions of generalization. Popular examples of anomalous generalization behavior include benign overfitting, double descent, and the success of overparametrization. We argue that these phenomena are not distinct to neural networks, nor particularly mysterious. This generalization behavior can be intuitively understood, and rigorously characterized, using long-standing generalization frameworks such as PAC-Bayes and countable hypothesis bounds. Soft inductive biases are a key unifying principle in explaining these phenomena: rather than restricting the hypothesis space to avoid overfitting, we embrace a flexible hypothesis space with a soft preference for simpler solutions that are consistent with the data. This principle can be encoded in many model classes, so deep learning is not as mysterious or different as it may seem. We also highlight how deep learning is relatively distinct in other ways, such as its ability to learn representations, phenomena like mode connectivity, and its relative universality.

Submission History

Andrew Wilson
[v1] Monday, 3 March 2025 22:56:04 UTC (1,206 KB)
