Quick entry. My colleague Cesar and I from USC High-Performance Computing attended the GPU Technology Conference in San Jose, California yesterday (www.gputechconf.com). I read that 5,000 people attended last year, and Nvidia, the conference host, expects a record number this year. The focus was Nvidia's GPU (Graphics Processing Unit) technology and its applications.

Deep learning: if I hadn't already heard the term from our researchers recently, I heard it in a BIG way yesterday. Put succinctly, it's about reasoning under uncertainty using neural networks.

I attended specifically for the hands-on training sessions, which were a mixed success. I didn't finish any of the tutorials in the time allotted; they could all use another iteration and some scaling back. But the gold nuggets included using the Jupyter Notebook for training (five stars for this terrific technology), discovering OpenACC (see http://on-demand.gputechconf.com/gtc/2016/webinar/openacc-course/Introduction-to-OpenACC-Course-20161026-1550-1-QA.pdf, and the sketch at the end of this entry), and the challenge of building a training model for feature recognition.

There was a lot to see, and in retrospect another day or two would have been welcome. Meanwhile, I look forward to hearing what our colleagues from Clemson learned during their time at the conference.
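P.S. For anyone curious about what OpenACC actually looks like, here is a minimal sketch: a simple SAXPY loop annotated with a directive that asks the compiler to offload it to the GPU. The file name and compile command are illustrative; see the linked course materials for the authoritative introduction.

```c
/* saxpy.c -- a minimal sketch of OpenACC's directive style.
 * Build with an OpenACC-capable compiler, e.g.: pgcc -acc saxpy.c
 */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n = 1 << 20;
    float *x = malloc(n * sizeof(float));
    float *y = malloc(n * sizeof(float));

    /* Initialize inputs on the host. */
    for (int i = 0; i < n; i++) {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    /* The directive asks the compiler to parallelize this loop on the
     * GPU, generating the data movement itself for this simple case.
     * Without the -acc flag the pragma is ignored and the loop simply
     * runs serially on the CPU. */
    #pragma acc parallel loop
    for (int i = 0; i < n; i++) {
        y[i] = 2.0f * x[i] + y[i];
    }

    printf("y[0] = %f (expected 4.0)\n", y[0]);
    free(x);
    free(y);
    return 0;
}
```

What struck me is that this is the whole pitch: you keep your existing C (or C++/Fortran) loop and add a directive, rather than rewriting the kernel in CUDA.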