11/27/2023

Numerical algebraic geometry

I'm going to assume that by 'high dimensional problems' you specifically mean learning problems such as speech recognition and image recognition. Real high dimensional problems are almost always best solved by the use of what are called neural nets, and specifically 'deep learning', which amounts to creating neural nets with many layers that predict effectively. I'm not personally aware of any approach that comes close to DL for these kinds of problems. As such, I'm afraid that the answer seems to be 'sadly no'. I might add that I teach data mining, so my impression is somewhat stronger than just a casual impression.

The rich interplay between algebraic geometry and string and gauge theories has recently been immensely aided by advances in computational algebra. However, these symbolic (Gröbner) methods are severely limited by algorithmic issues such as exponential space complexity and being highly sequential.

There are two main issues with high dimensional data and neural nets. First, the variables one is given are likely not the variables one wants. Second, there is the question of how the nodes in the various layers should be connected. The basic flow chart of a neural net can be encompassed in a directed acyclic graph (DAG), and one could fantasize that choosing the correct DAG would amount to an algebraic geometry problem, along the lines of the uses of algebraic geometry in phylogenetics. But that is just a fantasy as far as I know.

Topological data analysis is a typical example of sophisticated mathematics answering a question nobody was asking: it has generated a lot of excitement among mathematicians, but most data scientists have never heard of it, and the reason is simply that higher-order topological structures have not so far been found to be relevant to practical inference and classification problems.
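The DAG picture above can be made concrete. Here is a minimal sketch - the node names, weights, and tanh activation are all invented for illustration, not anyone's actual architecture - that evaluates a tiny feed-forward net by topologically sorting its graph:

```python
import math
from graphlib import TopologicalSorter  # Python 3.9+

# A tiny neural net as a DAG: two inputs feed two hidden nodes, which
# feed one output. Edge weights are illustrative, not trained values.
edges = {                     # (src, dst) -> weight
    ("x1", "h1"): 0.5, ("x2", "h1"): -1.0,
    ("x1", "h2"): 1.5, ("x2", "h2"): 0.25,
    ("h1", "y"): 1.0,  ("h2", "y"): -0.5,
}

def forward(inputs):
    preds = {}                       # dst -> list of predecessor nodes
    for (src, dst) in edges:
        preds.setdefault(dst, []).append(src)
    values = dict(inputs)
    # Topological order guarantees every predecessor is computed first.
    for node in TopologicalSorter(preds).static_order():
        if node in values:           # an input node, already known
            continue
        s = sum(values[src] * edges[(src, node)] for src in preds[node])
        values[node] = math.tanh(s)  # one common activation choice
    return values["y"]

print(forward({"x1": 1.0, "x2": 0.0}))
```

Any rewiring of the layers is just a different set of `edges`, which is what makes "choosing the correct DAG" a well-posed (if wide-open) question.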
One useful remark is that dimension reduction is a critical problem in data science for which there are a variety of useful approaches. It is important because a great many good machine learning algorithms have complexity which depends on the number of parameters used to describe the data (sometimes exponentially!), so reducing the dimension can turn an impractical algorithm into a practical one. This has two implications for your question. First, if you invent a cool new algorithm then don't worry too much about the dimension of the data at the outset - practitioners already have a bag of tricks for dealing with it (e.g. Johnson-Lindenstrauss embeddings, principal component analysis, various sorts of regularization). Second, it seems to me that dimension reduction is itself an area where more sophisticated geometric techniques could be brought to bear - many of the existing algorithms already have a geometric flavor.

That said, there are a lot of barriers to entry for new machine learning algorithms at this point. One problem is that the marginal benefit of a great answer over a good answer is not very high (and sometimes even negative!) unless the data has been studied to death. The marginal gains are much higher for feature extraction, which is really more of a domain-specific problem than a math problem. Another problem is that many new algorithms which draw upon sophisticated mathematics wind up answering questions that nobody was really asking, often because they were developed by mathematicians who tend to focus more on techniques than applications.
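As an illustration of one trick in that bag, here is a minimal sketch of a Johnson-Lindenstrauss-style random projection in plain Python; the dimensions, seeds, and tolerance below are illustrative assumptions rather than tuned values:

```python
import math
import random

def random_projection(points, k, seed=0):
    """Project d-dimensional points to k dimensions with a random
    Gaussian matrix scaled by 1/sqrt(k), as in the J-L lemma."""
    rng = random.Random(seed)
    d = len(points[0])
    # One random matrix shared by all points.
    R = [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(d)]
         for _ in range(k)]
    return [[sum(row[i] * p[i] for i in range(d)) for row in R]
            for p in points]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Three arbitrary points in 1000 dimensions, reduced to 200.
rng = random.Random(1)
pts = [[rng.gauss(0, 1) for _ in range(1000)] for _ in range(3)]
low = random_projection(pts, k=200)

# Pairwise distances are approximately preserved after projection.
for i in range(3):
    for j in range(i + 1, 3):
        ratio = dist(low[i], low[j]) / dist(pts[i], pts[j])
        print(round(ratio, 2))  # close to 1
```

The point of the sketch is only that a data-oblivious linear map can shrink 1000 parameters to 200 while roughly preserving geometry, which is why practitioners can defer worrying about dimension.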