Monday, April 8 • 2:00pm - 2:30pm
Lightning Talks


Does the win32 clang compiler executable really need to be over 21MB in size?
Russell Gallop
The title of this lightning talk comes from a bug filed in the early days of the PS4 compiler, which noted that the LLVM-based PS4 compiler was more than 3 times larger than the PS3 compiler. Since then it has almost doubled again, to over 40MB. For a compiler that targets a single system this seems excessive. A large executable can hurt cache performance and costs time when transferred for distributed builds.
In this lightning talk I will look at where this comes from and how it can be managed.

LLVM IR Timing Predictions: Fast Explorations via lli 
Alessandro Cornaglia
Many applications, especially in the embedded domain, have to run on different hardware target platforms. For these applications, it is necessary to evaluate both functional and non-functional properties, such as software execution time, across all of their hardware/software combinations. Especially in the context of software product line engineering, it is not feasible to test all variants one by one. The intermediate representation of the source code offers an attractive opportunity for a single-run analysis, because it covers the software variability while omitting hardware-dependent optimizations. We present an extension to the LLVM IR execution engines that are part of the LLVM lli tool. The extension evaluates functional and non-functional properties on the fly for all hardware variants during a single lli execution. In particular, it is designed to estimate the execution time of a program on multiple target platforms while considering different software variants. Both the interpreter and JIT execution modes are supported. In the future, our approach will be enriched with additional analysis techniques. Thanks to this approach, it is now possible to evaluate software variants with regard to multiple hardware platforms in a single lli run.
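As a rough illustration of the idea (not the authors' actual lli extension), the toy C sketch below attributes a per-opcode cost to each of several target platforms and accumulates the estimates while a single execution trace is processed; the opcodes, target names, and cycle counts are all invented for illustration only.

#include <stdio.h>

/* Toy sketch of the idea in the abstract above: while the IR is executed
 * once, every executed instruction contributes a target-specific cost, so
 * estimated execution times for several hardware variants accumulate in the
 * same run. Opcodes, targets and cycle counts are invented. */

enum Opcode { OP_ADD, OP_MUL, OP_LOAD, OP_BRANCH, NUM_OPCODES };
enum Target { CORTEX_M4, CORTEX_A53, RISCV_RV32, NUM_TARGETS };

/* Per-target cost model: estimated cycles per IR opcode. */
static const double CyclesPerOp[NUM_TARGETS][NUM_OPCODES] = {
    /* add  mul  load branch */
    {  1.0, 2.0, 2.0, 3.0 },  /* CORTEX_M4  */
    {  1.0, 3.0, 4.0, 1.0 },  /* CORTEX_A53 */
    {  1.0, 5.0, 3.0, 2.0 },  /* RISCV_RV32 */
};

static double EstimatedCycles[NUM_TARGETS];

/* Called once per executed instruction by a hypothetical interpreter hook;
 * updates the running estimate for every target at the same time. */
static void onInstructionExecuted(enum Opcode op) {
  for (int t = 0; t < NUM_TARGETS; ++t)
    EstimatedCycles[t] += CyclesPerOp[t][op];
}

int main(void) {
  /* Stand-in for one interpreted execution trace of the program. */
  enum Opcode trace[] = { OP_LOAD, OP_MUL, OP_ADD, OP_BRANCH, OP_LOAD, OP_ADD };
  for (unsigned i = 0; i < sizeof(trace) / sizeof(trace[0]); ++i)
    onInstructionExecuted(trace[i]);

  const char *names[NUM_TARGETS] = { "cortex-m4", "cortex-a53", "riscv-rv32" };
  for (int t = 0; t < NUM_TARGETS; ++t)
    printf("%-10s ~%.1f cycles\n", names[t], EstimatedCycles[t]);
  return 0;
}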

Simple Outer-Loop-Vectorization == LoopUnroll-And-Jam + SLP 
Dibyendu Das
In this brief talk I will show how Outer-Loop Vectorization (OLV), which is of great interest to the LLVM community, can be viewed as a combination of two transformations applied to a loop nest of interest: LoopUnrollAndJam and SLP vectorization. LoopUnrollAndJam is a fairly new addition to the LLVM loop-optimization repertoire. Combined with the fairly powerful SLP vectorizer that LLVM supports today, it lets us vectorize the outer loop of several important kernels automatically, without the support of any pragma. At present our implementation is a proof of concept and does not use a rigorous cost model. While we understand that OLV is being implemented in the LoopVectorizer using the VPlan technique, this talk highlights a quick and cheap way to solve the same problem differently, using two existing transforms.
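To make the combination concrete, here is a small hand-written illustration (not taken from the talk; the kernel and unroll factor of 4 are chosen purely for exposition) of how unroll-and-jamming an outer loop exposes isomorphic statements that SLP can pack:

/* Original loop nest: the outer i-loop is the vectorization candidate;
 * the inner j-loop carries a reduction. */
void kernel(float *restrict out, const float *restrict a,
            const float *restrict b, int n, int m) {
  for (int i = 0; i < n; ++i) {
    float sum = 0.0f;
    for (int j = 0; j < m; ++j)
      sum += a[i * m + j] * b[j];
    out[i] = sum;
  }
}

/* After unroll-and-jam of the i-loop by 4 (assuming n is a multiple of 4),
 * the inner body contains four isomorphic statement groups that an SLP
 * vectorizer can pack into vector operations, which is effectively
 * outer-loop vectorization with vectorization factor 4. */
void kernel_unroll_and_jam(float *restrict out, const float *restrict a,
                           const float *restrict b, int n, int m) {
  for (int i = 0; i < n; i += 4) {
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    for (int j = 0; j < m; ++j) {
      s0 += a[(i + 0) * m + j] * b[j];
      s1 += a[(i + 1) * m + j] * b[j];
      s2 += a[(i + 2) * m + j] * b[j];
      s3 += a[(i + 3) * m + j] * b[j];
    }
    out[i + 0] = s0;
    out[i + 1] = s1;
    out[i + 2] = s2;
    out[i + 3] = s3;
  }
}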

Clacc 2019: An Update on OpenACC Support for Clang and LLVM 
Joel E. Denny
We are developing production-quality, standard-conforming OpenACC [1] compiler and runtime support in Clang and LLVM for the US Exascale Computing Project [2][3]. A key strategy of Clacc’s design is to translate OpenACC to OpenMP in order to leverage Clang’s existing OpenMP compiler and runtime support and to minimize implementation divergence. To maximize reuse of the OpenMP implementation and to facilitate research and development into new source-level tools involving both the OpenACC and OpenMP levels, Clacc implements this translation in the Clang AST using Clang’s TreeTransform facility. However, we are also following LLVM IR parallel extensions being developed by the community as a path to improve compiler optimizations and analyses.

The purpose of this talk is to provide an update on Clacc progress over the preceding year including early performance results, to present the plan for the year ahead, and to invite participation from others. Clacc’s OpenACC support is still maturing and we have not yet offered it upstream. However, we have already upstreamed many mutually beneficial improvements from the Clacc project, including improvements to LLVM’s testing infrastructure and to Clang and its OpenMP support. This talk will summarize those contributions as well.

[1]: OpenACC standard: https://www.openacc.org/
[2]: Clacc: Translating OpenACC to OpenMP in Clang. Joel E. Denny, Seyong Lee, and Jeffrey S. Vetter. 2018 IEEE/ACM 5th Workshop on the LLVM Compiler Infrastructure in HPC (LLVM-HPC), Dallas, TX, USA, (2018).
[3]: Clacc: OpenACC Support for Clang and LLVM. Joel E. Denny, Seyong Lee, and Jeffrey S. Vetter. 2018 European LLVM Developers Meeting (EuroLLVM 2018).
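As a rough illustration of the kind of OpenACC-to-OpenMP translation described above (the directives Clacc actually generates may differ; this is only one plausible correspondence, and the saxpy kernel is invented for illustration):

/* OpenACC source. */
void saxpy_acc(int n, float a, const float *restrict x, float *restrict y) {
  #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
  for (int i = 0; i < n; ++i)
    y[i] = a * x[i] + y[i];
}

/* One plausible OpenMP translation targeting device offload. */
void saxpy_omp(int n, float a, const float *restrict x, float *restrict y) {
  #pragma omp target teams distribute parallel for \
      map(to: x[0:n]) map(tofrom: y[0:n])
  for (int i = 0; i < n; ++i)
    y[i] = a * x[i] + y[i];
}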

Targeting a statically compiled program repository with LLVM 
Russell Gallop
Following on from the 2016 talk "Demo of a repository for statically compiled programs", this lightning talk will present a brief overview of how LLVM was modified to target a program repository. This includes adding a new target output format and a new optimization pass to skip program elements already present in the repository. Reference: https://github.com/SNSystems/llvm-project-prepo

Speakers

Dibyendu Das
Senior Fellow, AMD

Russell Gallop
Software Engineer, Sony Interactive Entertainment

Alessandro Cornaglia
FZI Forschungszentrum Informatik

Joel E. Denny
Oak Ridge National Laboratory


Monday April 8, 2019 2:00pm - 2:30pm CEST
Theatre