Abstract
The purpose of this work is to develop and verify a method for quantitatively analyzing data collected from Kolsky bar experiments and to confirm its validity by comparing it to a finite element (FE) model. This study also aims to demonstrate the need for higher sample rate capture in miniature Kolsky bars (3.16 mm diameter in this work) by comparing results from two different data acquisition setups on identically sized experimental systems. We identified that the frequency capture capability needed to accurately depict experimental results on small-scale systems is at least 400 kHz, which is far greater than what is typically assumed for larger systems. Finally, a statistical method for evaluating results is presented and expanded upon, which removes the dependence on the knowledge and experience of the experimentalist to interpret the data. Using this analysis technique on the two systems examined in this study, we find upwards of 3.5 times better loading condition reproducibility and up to a 20 MPa reduction in the standard deviation of the sample stress profile, confirming the need for higher frequency capture rates.
Introduction
Kolsky bars, also known as split Hopkinson pressure bars (SHPB), are experimental setups used to characterize materials at higher strain rates and have grown in popularity in recent decades [1–4]. These classical Kolsky bars are typically more than 15 mm in diameter and test material in the 10²–10³ s⁻¹ regime [5–8]. A miniature system offers several benefits, including smaller samples and higher strain rates compared to traditional Kolsky bars, but with the additional complexity of higher frequency content in the stress waves measured during experiments [9–13]. These higher frequency waves imply higher bandwidth capture needs, accompanied by rigorous data analytics and associated techniques to ensure high-accuracy experimental data [14,15]. The context for this work is the need for improved methods of data collection and analysis to accurately record the high frequency content waves captured during experiments involving small Kolsky bars.
This study begins with an investigation into the sample rate needed to capture accurate data for this specific 3.16-mm diameter system. The literature is sparse on this critical requirement: most researchers assume 100 kHz to be an acceptable maximum frequency, but this appears to be significantly underspecified for miniature systems [14,16,17]. This investigation into ideal sampling rates is conducted using FE models generated in Abaqus to aid in validation of experimental data and observed trends in the loading pulses of the experiment, which are independent of the material properties of the sample. This FE model, validated by comparison to the experimental loading pulses, is used to determine whether observed differences in experimentally determined material response are due to constraints of the experimental data and techniques or to the material model used in the simulations. Finally, this research seeks to expand upon and verify a statistically significant, quantitative method for evaluating Kolsky bar data that does not require the experimentalist to make judgment calls on the quality and contents of results. The results are compared to data previously taken on an experimental system with lower frequency capture hardware that was nevertheless industry accepted. The analysis quantitatively shows the improvement in loading and test condition repeatability with the high sample rate capture system.
The significance of this research is the development of a method to statistically assess Kolsky bar data. This method enables more accurate and quantitative data evaluation while removing the need for operator interpretation of individual data sets or experiments. The increase in data fidelity and accuracy while using higher sample rate capture hardware is also shown, demonstrating the need for higher quality hardware on similarly sized systems. This, in combination with our finite element model, which utilizes strain data from the same location as the strain gauges used in the experiments, provides a diagnostic capability for evaluating new systems and designs as well as validating material property models. This method and associated findings of this work add to the greater knowledge of material response during high strain rate applications and how to verify finite element models of such experiments.
Methods
We first provide an overview of the Kolsky Bar experimental method as well as the two specific Kolsky systems being compared in this study, paying particular attention to the difference in sampling rate capabilities. The process for generating our finite element model is then outlined as well as the configuration and postprocessing steps for all data collected and prepared for analysis. Finally, we describe the statistical analysis method, which has been developed to evaluate the experimental data quantitatively, reducing the need for qualitative judgments of the operator on data quality and integrity.
Methods 1.1 Kolsky Bar.
A Kolsky system comprises a striker, an incident, and a transmitted bar. These bars are all made of the same alloy, usually steel or, for testing softer materials, aluminum, and all three bars usually have the same diameter. A test sample, typically cylindrical and of necessity a smaller diameter than the bars, is placed between the transmitted and incident bars with a thin layer of grease at each interface to aid in sample retention and minimize friction between surfaces during the test. A basic Kolsky bar diagram is shown in Fig. 1.
where vstriker is the velocity of the strike bar just before impact and C0 is the sound speed of the bars when the bars are made of the same material.
where At and As are the cross-sectional areas of the transmission bar and sample, Et is the elastic modulus of the transmission bar, and εt(t) is the measured transmitted pulse.
where ρ is the density, C is the sound speed, A is the cross-sectional area, and E is the elastic modulus.
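For context, the standard one-dimensional Kolsky relations that these symbol definitions accompany are commonly written as follows (textbook forms, shown here for reference rather than as the paper's numbered equations):

```latex
% Standard one-wave Kolsky relations consistent with the definitions above
\begin{align}
  \varepsilon_i &= \frac{v_{\mathrm{striker}}}{2\,C_0}
    && \text{incident-pulse strain amplitude} \\
  \bar{\sigma}_s(t) &= \frac{A_t E_t}{A_s}\,\varepsilon_t(t)
    && \text{average sample stress from the transmitted pulse} \\
  Z &= \rho C A = A\sqrt{E\rho}
    && \text{mechanical impedance, since } C = \sqrt{E/\rho}
\end{align}
```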
We build on the work presented in 2020 by Hannah et al., which, paired with a high throughput testing procedure, introduced a statistical analysis method for determining critical experimental parameters such as the standard deviation and variance as functions of time for the measured pulses [18]. These statistical parameters are collected for the entire time history of the pulses, allowing for a detailed analysis of the data at any point. This approach allows users to examine individual portions of interest in the load event and shows how the data spread evolves throughout the experiment. Several configurations were tested in that work, including the use of a momentum trap, a device that prevents additional compressive loads after the main experiment has concluded and is useful when trying to directly correlate a sample's damaged physical characteristics to a single loading event. For the data presented in this new work, a more classical Kolsky bar is used without a momentum trap, allowing for a direct comparison to the older dataset, which effectively used a traditional bar setup for the main analysis. The same sample material, aluminum alloy AL 2024, is used to again ensure valid comparisons between the two experimental datasets.
There are key differences between the 2020 system, referred to as System 1, and the new and upgraded system, referred to as System 2, used in this work. A detailed comparison is given in Table 1; the most notable difference is the change in the data acquisition system (DAQ). System 1 used Vishay 2310 amplification and filter cards, a recommended industry standard for Kolsky bar experimentalists [17]. As will be shown later, we have found this DAQ and its associated frequency response characteristics to be insufficient for accurately measuring the wave dynamics in our 3.16 mm diameter Kolsky system, especially when capturing high frequency content. We will show in the next section how finite element simulations were used to help identify this gap and, with the aid of the simulation, to specify a higher-quality DAQ that properly captures the experimental strain data. The automated launching system is also a notable upgrade, as it reduces the spread in impact velocity: the chamber pressure is known to ±0.5 psi and the release of the pressure valve is handled electronically, instead of the valve being opened manually by an operator as was the case in System 1.
| | PSU Mini Kolsky system 1 | PSU Mini Kolsky system 2 |
|---|---|---|
| DAQ | Vishay, 75 kHz max response | Dewetron, 1 MHz max response |
| Striker | 151 mm, 2 plastic bushings | 151 mm, thin Teflon tape |
| Firing system | Manual, opening a valve | Automated electronic system |
| Bar material | 316L steel | C-350 maraging steel |
| Strain gauges | 350 Ohm, 1 mm gauge section | 350 Ohm, 1 mm gauge section |
We will demonstrate how these statistical methods can be used in concert with the high throughput nature of Kolsky bar testing to deliver repeatable and rigorously defined datasets that can be relied upon for sample property characterization. The rapid testing nature of a Kolsky bar, which can conduct a test in a matter of minutes, lends itself to robust statistical methods built on at least 30 experiments to establish stochastic metrics. These metrics are functions of time, capturing the evolution of sample response variability over the entire test history. Such statistically significant data also provide a strong database for comparison with simulation results, adding credibility to material models and simulation structures. Additionally, the small size of this Kolsky bar, and by necessity its samples, minimizes the cost associated with exotic or expensive materials while still providing high quality, statistically relevant data.
Methods 1.2 Finite Elements.
While FE models of Kolsky bars have been generated in the past, we used our models as predictive tools to aid in the design of our experimental setup as well as a point of comparison for our experimental results. We used the finite element code Abaqus, following a procedure similar to Hannah et al. [15], to generate a basic axisymmetric model to quickly probe the response of the system. The results shown in Fig. 2 indicate that data must be collected at a rate of 2 MHz to ensure accurate data capture. Not only does the sampling rate need to be 2 MHz, but the measurement also needs to be taken with a device that can capture at least 400 kHz without significant attenuation, so that the high frequency content can be represented accurately and significant aliasing avoided. This 400 kHz requirement is drawn from the FE model data in Fig. 2, where the frequency of the measured signal is approximately 400 kHz; a 2 MHz sampling rate can accurately capture a 400 kHz wave while avoiding signal aliasing. It is with this information in hand that we specified and procured our new DAQ for use in System 2, which samples at 2 MHz with a maximum frequency response of 1 MHz, i.e., it can detect signal content up to 1 MHz. This is over thirteen times the maximum frequency capture limit of System 1's Vishay 2310-based DAQ, which had a maximum frequency response of 75 kHz at -3 dB. It is important to note that our strain gauges, 350 Ohm with a 1 mm long gauge section, also have a limited high frequency signal capture related to their inherent characteristics; the frequency limit of the gauge is a -0.5 dB change at 850 kHz. This pairing of DAQ and strain gauge ensures that we can accurately capture the high frequency content predicted by the finite element simulation.
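As a quick numerical check on the figures above, the 2 MHz requirement follows directly from the 400 kHz dominant frequency once an oversampling factor is chosen (the helper function below is illustrative, not from the paper; the 5-points-per-cycle factor reproduces the stated 2 MHz):

```python
# Rule-of-thumb check on the sampling requirement discussed above. The
# 400 kHz dominant frequency comes from the FE prediction; the oversampling
# factor of 5 points per cycle is an assumption that recovers 2 MHz.

def required_sample_rate_hz(max_signal_hz, points_per_cycle=5.0):
    """Nyquist demands more than 2 samples per cycle; several samples per
    cycle are needed in practice to resolve the waveform shape."""
    assert points_per_cycle > 2.0, "must exceed the Nyquist limit"
    return max_signal_hz * points_per_cycle

print(required_sample_rate_hz(400e3))  # 2000000.0 Hz, i.e., 2 MHz
```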
Our main FE model is a half-symmetry explicit dynamic model, selected as a compromise between the additional accuracy of three-dimensional elements, including the capture of any off-axis effects, and the increased simulation speed gained by exploiting the symmetry of the problem. The incident and transmission bars, measuring 812.8 mm in length and 1.58 mm in radius, are meshed with an element size of 0.16 mm along the length and 13 elements along the radius. These parameters led to strong mesh convergence at our specified sampling rate. Six elements centered on the surface location matching the strain gauge placement were averaged together to more accurately represent the experimentally implemented strain gauges and their 1 mm grid section. The striker, 99.16 mm in length by 1.58 mm in radius, was meshed with the same parameters; this also converged and was the best fit to the experimental results, as shown in later sections. An aluminum sample measuring 2.27 mm in length by 1.14 mm in radius was placed between the incident and transmission bars and meshed with 60 μm elements along the length and 18 elements radially, which again displayed the best convergence. The process of identifying this mesh is similar to that of [15]. The total model, including eight support bushings that are meshed coarsely since they only provide a frictional contact interface, contains roughly 3.7 million elements, and a single simulation takes 48–50 h on a 40-core high performance computing cluster. The material properties used in these simulations are shown in Table 2, with the aluminum properties and rate-dependent Johnson-Cook material model taken from the generally accepted values of Millan et al. [19].
| | Elastic modulus | Density (kg/m³) | Poisson's ratio | Johnson-Cook, A | Johnson-Cook, B | Johnson-Cook, n | Johnson-Cook, C | Johnson-Cook, m | Johnson-Cook, ref. strain rate |
|---|---|---|---|---|---|---|---|---|---|
| Bars | 202 GPa | 8091 | 0.272 | n/a | n/a | n/a | n/a | n/a | n/a |
| Sample | 70 GPa | 2700 | 0.3 | 352 MPa | 440 MPa | 0.42 | 0.0083 | 1.7 | 3.3 × 10⁻⁴ |
Methods 1.3 Data Configuration.
The trigger for the start of experimental data collection is based on reaching a strain value that corresponds with the initiation of the loading pulse. This value lies on the initial slope of the data, a region of rapid change approaching a vertical line, which results in a spread of trigger values of about 150 microstrain for the same point in time, accurate to 0.5 microseconds on our 2 MHz system. This spread is enough to generate large standard deviations on the rising and falling slopes that do not make sense when visually comparing the datasets. To correct this effective time shifting of the data, or in other words to properly align each section of interest in time, the datasets are linearly interpolated to a time-step of 5 nanoseconds. An alignment value in microstrain is then selected on the initial slope of the pulse of interest, and each dataset is shifted so that the sample closest to that value falls in the same row of the data matrix; because each row corresponds to a time increment, this aligns the data. Varying levels of microstrain are tried as the alignment value, and the optimal value is chosen by minimizing the average standard deviation of the data during the pulse. This process aligns the data for more accurate analysis without altering the data values in any way. Further, a 510 kHz lowpass filter is then applied to all datasets, both experimental and computational. This removes the contribution of ultrahigh frequency content that cannot otherwise be properly captured with System 2 and its 2 MHz sampling rate, and prevents any high frequency signal attenuation from the strain gauges themselves from affecting the results. A 510 kHz cutoff is conservative: classical theory indicates at least 4 points per oscillation for sinusoidal inputs, which corresponds to a 500 kHz cutoff at a 2 MHz sampling rate. This cutoff frequency was selected to ensure that no valuable or relevant data, including all noticeable trends, was lost.
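The interpolation, alignment, and filtering steps above can be sketched as follows (a minimal illustration, not the paper's code; the function names and call signatures are assumptions, while the 5 ns time-step, 510 kHz cutoff, and 2 MHz sample rate come from the text):

```python
# Illustrative sketch of the alignment and filtering steps described above.
import numpy as np
from scipy.signal import butter, filtfilt

def align_trace(t, strain, align_value, dt_new=5e-9):
    """Interpolate onto a 5 ns grid, then shift the time axis so the sample
    closest to `align_value` on the pulse's rising edge lands at t = 0."""
    t_fine = np.arange(t[0], t[-1], dt_new)
    s_fine = np.interp(t_fine, t, strain)
    i_align = int(np.argmin(np.abs(s_fine - align_value)))
    return t_fine - t_fine[i_align], s_fine

def lowpass(strain, fs=2e6, cutoff=510e3, order=4):
    """Zero-phase Butterworth lowpass (filtfilt avoids phase distortion)."""
    b, a = butter(order, cutoff / (fs / 2.0))  # normalized cutoff < 1 (Nyquist)
    return filtfilt(b, a, strain)
```

In the procedure described above, the alignment value itself is chosen by sweeping candidate microstrain levels and keeping the one that minimizes the average standard deviation across the aligned pulses.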
Methods 1.4 Statistical Analysis.
After preprocessing the data in the manner described above, a statistical analysis is used, similar to the analysis presented in Hannah et al. This method depends on the dataset having a normal distribution, and as such a high throughput testing procedure is used to generate a minimum of 30 tests; this volume of data is needed to support the assumption of normality. After conducting a minimum of 30 tests, 3 sigma bounds are generated to quantitatively identify any significant outliers. Over 99% of valid tests should fall within these bounds, so this provides an easy metric for identifying a flawed or anomalous test. In the analysis of the newly collected data, tests are removed only if they fall outside the 3 sigma bounds during the loading pulse and only after 15 microseconds have elapsed. This conservative approach ensures no material data is removed and also discounts any slight difference in wave dynamics during the initial portion of the loading pulse. It differs from the evaluation used for the System 1 data, where the entire loading pulse was evaluated because that DAQ's limited frequency capture meant there was no high frequency content in early time.
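The 3-sigma screen can be sketched as below (a minimal illustration under assumptions: the matrix layout with rows as time samples and columns as repeated tests, and the function name, are ours; the 15 microsecond gate is from the text):

```python
# Minimal sketch of the 3-sigma outlier screen described above.
import numpy as np

def flag_outliers(tests, dt, t_gate=15e-6):
    """Boolean mask of tests that leave the 3-sigma band after `t_gate`."""
    mean = tests.mean(axis=1, keepdims=True)
    std = tests.std(axis=1, ddof=1, keepdims=True)
    outside = np.abs(tests - mean) > 3.0 * std
    gated = outside[int(round(t_gate / dt)):, :]  # ignore early-time wave dynamics
    return gated.any(axis=0)
```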
Following this process, and assuming 30 valid datasets remain, 2 sigma bounds are generated and used for comparison with FE results. Building the 95% confidence interval (CI) allows us to quantify the error in our measurements and provides an experimentally informed range, as a function of time, for comparison with finite element results. Comparing FE results to a range of values also makes the modeling process easier, as modelers do not spend additional time attempting to fit a simulation to a potentially anomalous single experimental dataset. It has also been shown that the behavior of the incident and transmitted pulses is nearly identical for a normal Kolsky bar system, so the same techniques will be applied to determine the system variance in these experiments [18].
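The 2-sigma band construction is straightforward; a minimal sketch (same assumed array layout as before, rows as time samples and columns as tests):

```python
# Sketch of the 2-sigma (95% CI) band construction described above.
import numpy as np

def ci_bounds(tests):
    """Per-time-step mean and 95% (2-sigma) confidence bounds."""
    mean = tests.mean(axis=1)
    std = tests.std(axis=1, ddof=1)
    return mean - 2.0 * std, mean, mean + 2.0 * std
```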
The system variance is conservatively generated from the variance of the incident, or loading, pulse. For stress, the incident pulse variance is first scaled with the same constant shown in Eq. (4) using the average sample area. The conservative system variance profile is set to zero for the first three microseconds and then held constant at the minimum variance observed over the remainder of the pulse duration. This limits the adjustment to the main pulse duration and uses the most conservative measured variance.
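The profile construction above reduces to a few lines (an illustrative sketch; the function name is ours, while the 3 microsecond zero window and minimum-variance floor are from the text):

```python
# Sketch of the conservative system variance profile described above: zero
# for the first 3 microseconds, then constant at the minimum variance
# measured over the remainder of the pulse.
import numpy as np

def system_variance_profile(incident_var, dt, t_zero=3e-6):
    i0 = int(round(t_zero / dt))
    profile = np.zeros_like(incident_var)
    profile[i0:] = incident_var[i0:].min()  # most conservative measured variance
    return profile
```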
We then use Eqs. (6) and (7) to find the sample variance profiles, which are now free of the contributions of system variance. This analysis assumes that system variance and sample variance are independent of each other, which holds here because the system does not depend on the sample, nor the sample on the system.
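Since Eqs. (6) and (7) are not reproduced here, the sketch below shows only the independence property this step relies on: for independent sources, variances add, so the sample contribution can be recovered by subtracting the system profile from the total (floored at zero to guard against numerical noise; the function name is ours):

```python
# Independence-based variance split: Var_total = Var_system + Var_sample.
import numpy as np

def sample_variance(total_var, system_var):
    """Recover the sample's variance contribution by subtraction."""
    return np.clip(total_var - system_var, 0.0, None)
```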
Another slight change to the analysis is that the Dewetron amplifier and DAQ of System 2 automatically output strain from our quarter Wheatstone bridge setups, instead of volts. This does not affect the analysis other than removing the need for volt-to-strain conversion factors. A direct comparison can still be made between the total stress variance from the data collected with System 1 and the data collected in the new experiments presented here.
Results and Discussion
Results 1.1 Statistical Analysis and Comparison Between System 1 and System 2.
The data collected with the new system compare quite well to those collected previously. As can be seen in Fig. 3, the profile of the incident wave is much more representative of the simulated profile in Fig. 2, containing high frequency signal content. The bottom plot also shows a relatively consistent 95% CI bound as a percentage of the mean value, averaging 2.75% throughout the main loading period, which begins 3 microseconds into the pulse. This is a significant improvement over the 9% for the same metric under System 1, while also being a far more accurate representation of the realized loading pulse.
With these results in hand, we are able to generate the system variance profile, which can be appropriately scaled for use in Eqs. (6) and (7). The stress and strain rate standard deviation plots, including the system standard deviation and the adjusted material curves, are shown in Figs. 4 and 5, with the properly adjusted strain and stress plots in the following section.
These plots show the low standard deviation in stress after 15–20 microseconds, which is when plastic deformation dominates and the samples are in dynamic equilibrium. The strain profile grows with time, which is due to the slightly different strain rates achieved in each experiment and their propagated impact on the confidence intervals. The higher strain rate tests accumulate more total strain over the length of the test, as expected. The rate-dependent nature of the sample will cause some cross-coupling of the strain rate and stress profiles, but this should be minimal, as the sustained strain rate does not vary more than approximately ±250 strain per second on an approximately 2500 strain per second pulse. Taking into account the rate-dependent material model in Table 2, this difference in strain rate equates to a 0.15% predicted difference in stress, which is negligible.
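The 0.15% figure above can be checked numerically from the Johnson-Cook rate term using the Table 2 constants (C = 0.0083, reference rate 3.3 × 10⁻⁴ s⁻¹); the helper function is illustrative:

```python
# Numerical check of the ~0.15% stress difference quoted above, using the
# rate-dependent multiplier of the Johnson-Cook flow stress model.
import math

def jc_rate_factor(strain_rate, C=0.0083, ref_rate=3.3e-4):
    """Rate-dependent multiplier: 1 + C * ln(strain_rate / ref_rate)."""
    return 1.0 + C * math.log(strain_rate / ref_rate)

lo, hi = jc_rate_factor(2250.0), jc_rate_factor(2750.0)
diff_pct = 100.0 * (hi - lo) / lo  # roughly 0.15%
```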
We see that the system-level adjustment is small compared with the measured values. This is due to the significantly lower spread of the incident loading pulse seen in Fig. 3. In the System 1 results provided in Hannah et al. [18], the average percentage value was 9.35%, versus 2.75% for System 2 as shown in Fig. 3. This means that System 2 provides nearly 3.5 times better wave capture reproducibility than System 1, demonstrating the clear need for accurate high frequency capture. The increased accuracy and decreased standard deviation of System 2 result in a much lower system-level standard deviation and variance contribution compared to System 1.
The measured pulse for stress in System 2 also contains less spread than that in System 1. On average, the improvement is nearly 20 MPa for the plastically dominated section of the curve. We also see a decrease in the peak standard deviation of 5–10 MPa, and a sharper and more continuous decline to the plastically dominated time period compared with System 1. This increase in data capture accuracy is attributed to the higher signal capture capacity in System 2 compared to that of System 1, which again clearly demonstrates the need to capture high frequency content for small Kolsky systems.
Results 1.2 Finite Element Results and Comparison.
Results from the finite element simulation of the loading pulse are shown in Fig. 6, plotted on top of our experimental bounds. These data show that the model captures the critical wave dynamics of the experiment and produces accurate loading conditions. The simulation performs especially well in the critical section of the pulse, after 10–15 microseconds, where it compares well with the 95% confidence interval, indicating an accurately captured load during what will be the significant plastic deformation period of the sample.
The importance of this is that we now have confidence that the discrepancies between the material response in the data and the finite element model are a result of the sample material model in the finite element simulation, not the mesh density or temporal resolution of the information collected from the simulation. Comparisons of the simulation results with the experimental data and the 95% CI bounds are shown in Fig. 7, with the simulation dashed blue and the experimental data solid. We can see that the simulation appears too stiff, with the stress profile higher and the strain lower. The simulation does follow the correct trends, however, which is most easily seen in the stress profile. The work presented by Seidt and Gilat [20] indicates that this type of aluminum does not exhibit rate dependency until approximately 5000 strain per second. Since our tests peak anywhere from 5000 to 8000 strain per second with sustained strain rates around 2500, we adjust the strain rate sensitivity factor to 0.00415, half of the starting value, as a middle ground between the two constitutive models. This adjusted material model presents a better fit to the experimentally determined bounds and can be seen as the yellow dashed line in Fig. 7.
Conclusion
In this work, we have developed and finalized a method for quantitatively analyzing Kolsky bar experimental data and confirmed its validity with an FE model generated in Abaqus. We have also shown that miniature systems, such as the 3.16 mm system presented here, require a much higher frequency response capture capability than is generally accepted to accurately record Kolsky bar data at this smaller scale. Moving from the 75 kHz frequency capture of System 1 to the 1 MHz capture of System 2 resulted in a nearly 3.5 times increase in relative incident pulse reproducibility and a 10–20 MPa reduction in the standard deviation of the stress profile, thus increasing the accuracy of properties derived from miniature Kolsky bar experiments. These methods provide strict, quantitative evaluation criteria for data analysis instead of depending on operator interpretation of results, which lowers the barrier to entry for new experimentalists. The finite element model of the system provides a powerful diagnostic tool for system evaluation and design, while also serving the traditional purpose of material property model verification. With a verified finite element model and mesh, the simulation can be relied upon for future experimental design and analysis efforts.
Acknowledgment
We would like to acknowledge support from REL systems and Dewetron for hardware and software setups.
Funding Data
Los Alamos National Laboratory (LANL) in partnership with The Pennsylvania State University (Funder ID: 10.13039/100008902).
Data Availability Statement
The authors attest that all data for this study are included in the paper.
Nomenclature
- As =
sample area
- At =
transmitted bar area
- CI =
confidence interval
- Cr =
resistance of calibration resistor
- C0 =
elastic wave speed of the incident bar
- Et =
Young’s modulus transmitted bar
- ls =
sample length
- lstriker =
strike bar length
- s⁻¹ =
strain rate, strain per second
- vstriker =
strike bar velocity, m/s
- Z =
mechanical impedance
- εi(t) =
time-dependent incident strain pulse
- εr(t) =
time-dependent reflected strain pulse
- εt(t) =
time-dependent transmitted strain pulse
- ρ =
density
- σ̄s =
average sample stress
- σ² =
statistical variance