Jiyang Luo

Jiyang Luo is a computer scientist specializing in artificial intelligence who has contributed to the academic field of artificial neural networks.
Education
Jiyang Luo attended Guo Guang Secondary School, one of the most prestigious secondary schools in the People's Republic of China. Because of his exemplary performance, he left school a year early after being invited to attend Tsinghua University, where he majored in computer science, a major available only to the highest-achieving students in China. He specialized in artificial intelligence, a field still in its infancy in the 1980s.
Early achievements
Jiyang Luo worked with punch-card machines, creating a design feature that was later patented. Luo's achievements also earned him a national science award. Luo founded a start-up to market one of his inventions, but with the factory site already chosen and investors already committed, he abruptly left the venture to continue his education in the United States.
Work with neural networks
Continuing to be absorbed in academics, Luo became an expert in artificial neural networks during a period of research at Wright State University. He wrote a paper entitled "Instant learning for supervised learning neural networks: a rank-expansion algorithm", published in the proceedings of the 1994 IEEE International Conference on Neural Networks, part of the IEEE World Congress on Computational Intelligence. For this work he was invited to present at the congress, where he was joined by many fellow scientists and alumni from his native China. His research was a breakthrough for the artificial neural network community, proposing a new network architecture and a non-iterative learning algorithm that improved optimization and reduced learning error.<ref name="paper1"/>
Neural network architecture and learning algorithms
Luo identified the sources of learning error in such networks and presented solutions. As he described in his 1994 paper: "An one-hidden layer neural network architecture is presented. An instant learning algorithm is given to decide the weights of a supervised learning neural network. For an n dimensional, N-pattern training set, a maximum of N-r hidden nodes are required to learn all the patterns within a given precision (where r is the rank, usually the dimension, of the input patterns). Using the inverse of activation function, the algorithms transfer the output to the hidden layer, add bias nodes to the input, expand the rank of input dimension. The proposed architecture and algorithm can obtain either exact solution or minimum least square error of the inverse activation of the output. The learning error only occurs when applying the inverse of activation function. Usually, this can be controlled by the given precision."<ref name="paper2"/>
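The abstract above describes a direct, single-pass procedure: transfer the targets through the inverse of the activation function, then solve a linear least-squares problem for the output-layer weights. The following is a minimal NumPy sketch of that general idea, not Luo's exact method; the hidden weights here are drawn randomly for illustration rather than built by the paper's rank-expansion construction (which guarantees at most N - r hidden nodes suffice), and the function names are hypothetical.
<syntaxhighlight lang="python">
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def inverse_sigmoid(y, eps=1e-7):
    # Inverse (logit) of the sigmoid activation. Targets are clipped so the
    # inverse is defined; per the abstract, this inverse-activation step is
    # the only place learning error enters, controlled by the precision eps.
    y = np.clip(y, eps, 1.0 - eps)
    return np.log(y / (1.0 - y))

def instant_train(X, T, n_hidden, rng=None):
    """Train a one-hidden-layer network in a single pass (no iteration).

    X: (N, n) input patterns; T: (N, m) targets in (0, 1).
    Hidden weights are random here for illustration only; the 1994 paper
    constructs them so at most N - r hidden nodes (r = rank of the inputs)
    learn all N patterns within a given precision.
    """
    rng = np.random.default_rng() if rng is None else rng
    N, n = X.shape
    Xb = np.hstack([X, np.ones((N, 1))])           # add bias node to the input
    W_in = rng.standard_normal((n + 1, n_hidden))  # input-to-hidden weights
    H = sigmoid(Xb @ W_in)                         # hidden activations
    Hb = np.hstack([H, np.ones((N, 1))])           # bias for the output layer
    Z = inverse_sigmoid(T)                         # transfer targets through the inverse activation
    # Exact solution when Z lies in the column space of Hb; otherwise the
    # minimum least-squares error of the inverse-activated outputs.
    W_out, *_ = np.linalg.lstsq(Hb, Z, rcond=None)
    return W_in, W_out

def predict(X, W_in, W_out):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    Hb = np.hstack([sigmoid(Xb @ W_in), np.ones((X.shape[0], 1))])
    return sigmoid(Hb @ W_out)
</syntaxhighlight>
When the inverse-activated targets lie in the column space of the hidden activations, the least-squares step recovers them exactly; otherwise its residual is the "minimum least square error" the abstract refers to, and the only remaining error comes from clipping inside the inverse activation.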