AUTHOREA

Preprints

Explore 40,879 preprints on the Authorea Preprint Repository

A preprint on Authorea can be a complete scientific manuscript submitted to a journal, an essay, a whitepaper, or a blog post. Preprints on Authorea can contain datasets, code, figures, interactive visualizations and computational notebooks.
Read more about preprints.

Science Publishing's Napster Moment and the Coming Youtubes of Science
Josh Nicholson
Alberto Pepe

May 15, 2019
Research is really f**king important. This statement is almost self-evident from the fact that you're reading this online. From research has come the web, life-saving vaccines, pasteurization, and countless other advancements. In other words, you can look at cat gifs all day because of research, you're alive because of research, and you can safely add milk to your coffee or tea without contracting some disease, because of research. But how research is done today is being stymied by how it is communicated. Most research is locked behind expensive paywalls \cite{Bj_rk_2010}, is not communicated to the public or scientific community until months or years after the experiments are done \cite{trickydoi}, is biased in how it is reported - only "positive" results are typically published \cite{Ahmed_2012} - does not supply the underlying data of major studies \cite{Alsheikh_Ali_2011}, and has been found to be irreproducible at alarming rates \cite{Begley_2012}.

Why is science communication so broken?

Many would blame old profit-hungry publishers, like Elsevier, and in many respects that blame is deserved. However, here's a different hypothesis: what is holding us back from a real shift in the research communication industry is not Elsevier, it's Microsoft Word. Yes, Word, the same application that introduced us to Clippy, is the real impediment to effective communication in research.

Today, researchers are judged by their publications, both in terms of quantity and prestige. Accordingly, researchers write up their documents and send them to the most prestigious journals they think they can publish in. The journals, owned by large multinational corporations, charge researchers to publish their work and then charge institutions again to subscribe to the content. Such subscriptions can run into the many millions of dollars per year per institution \cite{Lawson_2015}, with individual access costing $30-60 per article.

The system and process for publishing and disseminating research is inimical to scientific advancement, and accordingly the Open Access and Open Science movements have made big steps towards improving how research is disseminated. Recently, Germany, Peru, and Taiwan have boycotted subscriptions to Elsevier \cite{Schiermeier_2016}, and an ongoing boycott to publish or review for certain publishers has accumulated the signatures of 16,493 researchers and counting. New developments such as Sci-Hub have helped to make research accessible, albeit illegally. While regarded as a victory by many, the Sci-Hub approach is not the solution that researchers are hoping for, as it is built on an illegal system of exchanging copyrighted content and bypassing publisher paywalls \cite{Priego}. A more interesting, technologist's view of the matter is that the real culprit for keeping science closed isn't actually the oligopoly of publishers \cite{Larivi_re_2015} -- after all, they're for-profit companies trying to run businesses, and they're entitled to do any legal thing that helps them deliver value to shareholders. We suggest that a concrete solution for true open access is already out there, and it's 100% legal.

What is the best solution to truly and legally open access to research?

The solution is publishing preprints -- the last version of a paper that belongs to the author before it is submitted to a journal for peer review. Unlike in other industries (e.g. literature, music, film, etc.), in research the copyright of the preprint version is legally held by the author, even after publication of the work in a journal. Preprints are rapidly gaining adoption in the scientific community, with a couple of preprint servers (e.g. arXiv, which is run by Cornell University and is primarily for physics papers, and bioRxiv, which is similarly for biology papers) receiving thousands of preprints per month. Some of the multinationals are responding with threats against authors not to publish (or post) preprints. However, they are being met with fierce opposition from the scientific community, and the tide seems to be turning. Multinationals are now under immense pressure not just from authors in the scientific community, but increasingly from the sources of public and private funding for the actual research. Some organizations are even mandating preprints as a condition of funding. But what is holding back preprints and, in general, a better way for authors to have more control over their research? We think the inability of scientists to independently produce and disseminate their work is a major impediment, and at the heart of that problem is how scientists write.

How can Microsoft Word harm scientific communication?

Whereas other industries, like the music industry, have been radically transformed and accelerated by providing creators with powerful tools like Youtube, there is no parallel in research. Researchers are reliant upon publishers to get their ideas out, and because of this they are forced into an antiquated system that has remained largely stagnant since its inception over 350 years ago. Whereas a minority of researchers in math-heavy disciplines write using typesetting formats like LaTeX, the large majority of researchers (~82%) write their documents in Microsoft Word \cite{brischoux2009don}. Word is easy to use for basic editing but is essentially incompatible with online publishing. Word was created for the personal computer: offline, single-author use. It also was not built with scientific research in mind - as such, it lacks support for complex objects like tables, math, data, and code. All in all, Word is extraordinarily feature-poor compared to what we can accomplish today with an online collaborative platform. Because publishers have traditionally accepted manuscripts formatted in Word, and because they consistently fail to truly innovate from a technological standpoint, millions of researchers find themselves using Word. In turn, the research they publish is non-discoverable on the web, data-less, non-actionable, not reusable and, most likely, behind a paywall.

What does the scientific communication ecosystem of the future look like?

What is needed is a web-first solution. Research articles should be available on distinct web pages, Wikipedia style. Real data should live underneath the tables and figures. Research needs to finally be machine readable (instead of just tagged with keywords) so that it may be found and processed by search engines and machines. Modern research also deserves rich media enhancement -- visualizations, videos, and other forms of rich data in the document itself. All told, researchers need to be able to disseminate their ideas in a web-first world, while playing the "journal game" as long as it exists. Our particular dream (www.authorea.com) is to build a democratic platform for scientific research -- a vast organizational space for scientists to read and contribute cutting-edge science.
There is a new class of startups out there doing similar things with the research cycle, and we feel like there is a real and urgent demand for solutions right now in research.
An "Alternative" Science Career
Josh Nicholson

December 03, 2017
I was accepted into the cell biology program at Virginia Tech under conditional terms due to a mediocre undergraduate GPA. This was the deal: maintain good grades and I’d get to continue, or slip up and I was out. As an undergraduate, I spent a lot of time surfing and very little time cramming for tests -- what can I say? I wasn’t exactly a traditional grad student applicant.

Despite my shortcomings on paper, I was ambitious. Before grad school, I contacted a researcher from Harvard who’d proposed through mathematical models that we could kill cancer cells with cancer cells \cite{Deisboeck_2008}. I told him I wanted to test his proposal experimentally. When he wrote back and I brought the proposal to my potential PI, I quickly realized that incoming grad students don’t actually do this. You’re supposed to go through rotations first, then select a lab, pick a project that falls within the scope of your PI’s research, and so on. This wasn’t exactly my style.

The deeper I got into my PhD, the more I realized the game you have to play in order to be successful: publish in certain journals, publish with the best coauthors you can manage, publish as much as you can. I played the game and published as much as possible within the scope of cell biology and cancer, but also papers within the scope of the scientific communication process itself -- papers on funding, peer review in high-impact journals, and peer review at the NIH. I wrote about cancer, but I also wrote about all the problems I was seeing around me in the process itself.

I never thought about actually doing anything about these systemic problems until I read The Trouble with Medical Journals \cite{Smith_2006}. The key tenet -- that peer review misses most major errors -- is the idea that sent me down the path of building a publishing company to take the whole publishing process and flip it in favor of openness. Instead of filtering results and then publishing, I wanted scientists first to publish and then to filter -- to publish and then winnow, so to speak. That’s why The Winnower was born.

From Scientist to Entrepreneur

I didn’t know anything about starting a business, but I knew I needed some money to do it. I wrote up some ideas for a new publication, entered a business contest on campus, and lost. It was harder than I thought. But then I sent that proposal to some people I knew from undergrad and, through a lot of luck, managed to get 50k from a private investor.

The Winnower launched in May 2014, and over the course of two years we shifted away from publishing traditional papers to publishing so-called grey literature -- informal documents that traditional publishers ignore. We published scholarly reddit AMAs, foldscope images, responses to NIH RFIs, journal clubs, and some of the coolest essays I’ve ever read. We formalized blogs and journal clubs so that they could act as reviewers themselves. People liked what we were doing, as judged by the growth of publications and readership. Why shouldn’t reddit AMAs have DOIs and be given real scientific consideration? I gave talks around the world, raised more money, and met other academics doing similar things with their own companies. I felt lucky and privileged to be doing what I loved, despite the fact that I was making less running a company than I had as a grad student. The End.

Okay, the story doesn’t have an ending yet because the story is still ongoing. Very recently The Winnower was acquired by Authorea, another early company working on the same problem but from a different direction. Authorea, which was also founded by former academics, is fixing how researchers write, collaborate, and share online. Together we’re working to become the place where researchers can write and publish whatever they want, collaboratively and online. It’s an ambitious goal, but so too was cancer research. I can’t say if we’ll achieve our goals, and I know the road ahead is still daunting, but I think the problems we’re working to solve are as hard as some of the most complex problems in science. What is certainly true is that we must work collaboratively to solve them. I hope this essay inspires more academics to follow their own “crazy” ideas, and I hope you’ll stand with our mission to build a more transparent system of research communication. Let’s get it right.
Step-by-step NMO correction
Leonardo Uieda

December 07, 2016
Corresponding author: leouieda@gmail.com

This is a part of The Leading Edge “Geophysical Tutorials” series. You can read more about it in . Open any textbook about seismic data processing and you will inevitably find a section about the normal moveout (NMO) correction. There you’ll see that we can correct the measured travel-time of a reflected wave t at a given offset x to obtain the travel-time at normal incidence t₀ by applying the following equation:

\[t_0^2 = t^2 - \frac{x^2}{v_{\mathrm{NMO}}^2}\]

in which v_NMO is the NMO velocity. There are variants of this equation with different degrees of accuracy, but we’ll use this one for simplicity. When applied to a common midpoint (CMP) section, the equation above is supposed to turn the hyperbola associated with a reflection into a straight horizontal line. What most textbooks won’t tell you is _how, exactly, do you apply this equation to the data_? Read on and I’ll explain step-by-step how the algorithm for NMO correction works and how to implement it in Python. The accompanying Jupyter notebook contains the full source code, with documentation and tests for each function. You can download the notebook at github.com/seg or github.com/pinga-lab/nmo-tutorial.
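In outline, the algorithm is: for every zero-offset time t₀ and offset x, compute t = √(t₀² + x²/v_NMO²) and interpolate the recorded trace at that time. Here is a minimal Python sketch of that logic (the accompanying notebook contains the full, tested implementation; the array shapes below are assumptions):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def nmo_correction(cmp_gather, dt, offsets, velocities):
    """NMO-correct a CMP gather.

    cmp_gather : 2D array, shape (n_samples, n_traces)
    dt         : time sampling interval (s)
    offsets    : offset of each trace (m), shape (n_traces,)
    velocities : NMO velocity at each time sample (m/s), shape (n_samples,)
    """
    corrected = np.zeros_like(cmp_gather)
    times = np.arange(cmp_gather.shape[0]) * dt
    for j, x in enumerate(offsets):
        # reflection travel-time t for each zero-offset time t0:
        # t^2 = t0^2 + x^2 / v_nmo^2
        t = np.sqrt(times**2 + (x / velocities) ** 2)
        # pick the amplitude at t by interpolating the recorded trace
        trace = CubicSpline(times, cmp_gather[:, j])
        inside = t <= times[-1]
        corrected[inside, j] = trace(t[inside])
    return corrected
```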
Moonlight Shadow
Matteo Cantiello

November 30, 2021
"Like every great river and every great sea, the moon belongs to none and belongs to all. It still holds the key to madness, still controls the tides that lap on shores everywhere, still guards the lovers who kiss in every land under no banner but the sky". E.B. White The New Yorker, July 26, 1969Where does the Moon come from?Scientist believe that our Moon formed out of a ‘giant impact’ that occurred between a Mars-sized planet and the early Earth, some 4.5 billion years ago. The Moon was then formed from the coalescence of the orbiting debris scattered during the impact.Recent results seem to confirm this scenario
Experiments testing Bell’s inequality with local real source
Peifeng Wang

May 15, 2019
Aside from Bell’s inequality, entanglement and local real models have other aspects which are expected in experiments. Analysis of a) the physical concept of entanglement and b) the precise interpretation of experiments shows that: 1) in a reported loophole-free violation of Bell’s inequality, the transition of the wave function from odd parity to even parity reveals that the experiment is performed on the spins of a pair of local real nitrogen-vacancy (NV) centres; 2) the equivalence between rotating a spin by θ and rotating the measurement basis by −θ is not applicable in the entanglement case, thus in long-range entanglement setups for closing the locality loophole, the operation of rotating the spin followed by measurement puts the entanglement in question; 3) the fair-sampling assumption arises whenever a finite sample is used to represent the entire population space; it is thus a basic requirement of any statistical experiment, and the fair-sampling loophole cannot be closed.
Augmented Reality with Hololens: Experiential Architectures Embedded in the Real Wo...
Paul Hockett
Tim Ingleby

February 27, 2017
_Additional notes:_

Authors:
- Paul Hockett, National Research Council of Canada, 100 Sussex Drive, Ottawa, K1A 0R6, Canada
- Tim Ingleby, Department of Architecture and Built Environment, Northumbria University, Ellison Place, Newcastle upon Tyne, NE1 8ST, UK

Links:
- Online version: Authorea, DOI: 10.22541/au.148821660.05483993
- Repository for videos and files: Figshare, DOI: 10.6084/m9.figshare.c.3470907
- arXiv version (1610.04281)
- Ongoing work: femtolab.ca
BillCorrectly: A software tool to help psychiatrists bill E&M codes appropriately
Kevin J. Black

February 23, 2017
© 2016, Kevin J. Black. This work is licensed under a Creative Commons Attribution 4.0 International License.
Time-resolved multi-mass ion imaging: femtosecond UV-VUV pump-probe spectroscopy wi...
Paul Hockett
Ruaridh Forbes

and 8 more

March 23, 2017
_Publication history_

- Original document (Authorea), DOI: 10.22541/au.149030711.19068540
- arXiv 1702.00744 (Feb. 2017)
- J. Chem. Phys. special issue “Developments and Applications of Velocity Mapped Imaging Techniques” (March 2017), DOI: 10.1063/1.4978923
- Data and analysis scripts (OSF), DOI: 10.17605/OSF.IO/RRFK3

_See also_

- AIP Press Release: _The Inner Lives of Molecules_ (April 2017)
- PImMS camera website
- Vallance group website
- Femtolab website
PolyLog_2 of Inverse Elliptic Nome Exponential Generating Function
Benedict Irwin

November 02, 2020
MAIN

Let G(q) = Li₂(m(q)) be an exponential generating function, where Li₂ is the polylogarithm of order 2,

\[\operatorname{Li}_2(z)=\sum_{k=1}^{\infty}\frac{z^k}{k^2},\]

and m(q) is the inverse elliptic nome, which can be expressed through the Dedekind eta function as

\[m(q)=\frac{\eta(\tfrac{\tau}{2})^{8}\,\eta(2\tau)^{16}}{\eta(\tau)^{24}},\]

where q = e^{iπτ}, or by Jacobi theta functions as

\[m(q)=\left(\frac{\theta_2(0,q)}{\theta_3(0,q)}\right)^{4},\]

where

\[\theta_2(0,q)=2\sum_{n=0}^{\infty} q^{(n+1/2)^2}, \qquad \theta_3(0,q)=1+2\sum_{n=1}^{\infty} q^{n^2},\]

giving explicitly

\[G(x)=\sum_{k=1}^{\infty}\frac{1}{k^2}\left(\frac{2\sum_{n=0}^{\infty} x^{(n+1/2)^2}}{1+2\sum_{n=1}^{\infty} x^{n^2}}\right)^{4k}=\sum_{k=0}^{\infty}\frac{a_k x^k}{k!}.\]

If we consider the sequence of coefficients a_k associated with G(x) modulo 1, that is, the fractional parts frac(a_k), we gain the following sequence

\[0,0,0,\tfrac{2}{3},0,\tfrac{4}{5},0,\tfrac{5}{7},0,0,0,\tfrac{6}{11},0,\tfrac{10}{13},0,0,0,\tfrac{1}{17},0,\tfrac{3}{19},0,0,0,\tfrac{7}{23},0,0,0,0,0,\tfrac{13}{29},0,\tfrac{15}{31},0,0,0,0,0,\tfrac{21}{37},0,0,0,\tfrac{25}{41},\cdots\]

and we see the primes in the denominator in positions where the power of x is a prime. We also note that, so far, the numerators are always less than the denominators (obviously), but count successively upwards, producing monotonically increasing subsequences. The prime-only parts continue

\[\tfrac{2}{3},\tfrac{4}{5},\tfrac{5}{7},\tfrac{6}{11},\tfrac{10}{13},\tfrac{1}{17},\tfrac{3}{19},\tfrac{7}{23},\tfrac{13}{29},\tfrac{15}{31},\tfrac{21}{37},\tfrac{25}{41},\tfrac{27}{43},\tfrac{31}{47},\tfrac{37}{53},\tfrac{43}{59},\tfrac{45}{61},\tfrac{51}{67},\tfrac{55}{71},\tfrac{57}{73},\tfrac{63}{79},\tfrac{67}{83},\tfrac{73}{89},\tfrac{81}{97},\cdots\]

After closer inspection, we see that the numerators from the point 1, 3, 7, 13, 15, 21, 25, 27, 31, 37, 43, 45, 51, 55, 57, … take the form prime(k) − 16; the numerators before this take the form 2 ⋅ prime(k) − 16 for 6 and 10, 3 ⋅ prime(k) − 16 for 5, 4 ⋅ prime(k) − 16 for 4, and 6 ⋅ prime(k) − 16 for the first numerator, 2. It is likely then that this pattern continues for the rest of the numbers. This then gives, for the coefficient a_k of G(x) with k > 6,

\[\operatorname{frac}(a_k)=\frac{-16 \bmod k}{k}, \qquad k\in\mathbb{P}.\]

We find that if we take the original coefficients a_k and subtract this fractional part in general,

\[\delta_k=a_k-\frac{-16 \bmod k}{k},\]

then for numbers m which cannot be written as a sum of at least three consecutive positive integers, δ_m is an integer (empirical). A111774 is “Numbers that can be written as a sum of at least three consecutive positive integers”; apart from odd primes, the numbers which cannot be so written are the powers of two.

OTHER

We find a similar relationship with

\[G_2(x)=\operatorname{Li}_2\!\left(\frac{x}{(1-x)^2\left(1-\frac{x}{x-1}\right)^2}\right)=\sum_{k=0}^{\infty}\frac{b_k x^k}{k!},\]

where the b_k seem to follow, for k > 2,

\[\operatorname{frac}(b_k)=\frac{k-1}{k}, \qquad k\in\mathbb{P}.\]

GENERATING FUNCTION FOR FRACTIONAL PART

We see that the generating function for n/2 is

\[\frac{x}{2(x-1)^2},\]

but the generating function for the fractional part of n/2, which is (n mod 2)/2, is given by

\[\frac{x}{2(1-x^2)}.\]

The property described is associated with the polylog, and we see that the fractional parts of the coefficients of

\[\operatorname{Li}_2(2x)=\sum_{k=1}^{\infty}\frac{c_k x^k}{k!}\]

give

\[\operatorname{frac}(c_k)=\begin{cases}\frac{k-2}{k}, & k\in\mathbb{P}\\ 0, & \text{otherwise}.\end{cases}\]

This means

\[\operatorname{frac}\!\left(\frac{2^k\,k!}{k^2}\right) = \frac{k-2}{k}, \qquad k\in\mathbb{P},\]

or

\[\operatorname{frac}\!\left(\frac{2^k\,(k-1)!}{k}\right) = \frac{k-2}{k}, \qquad k\in\mathbb{P}.\]

We also see that

\[\operatorname{frac}\!\left(\frac{3^k\,(k-1)!}{k}\right)=\begin{cases}\frac{k-3}{k}, & k\in\mathbb{P}\\ \frac{1}{2}, & k=4\\ 0, & \text{otherwise}.\end{cases}\]
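The observed pattern can be checked numerically. Below is a small sympy sketch (the truncation orders are arbitrary illustrative choices) that builds m(q) from the theta series, expands G(q), and compares frac(a_k) with the conjectured (−16 mod k)/k at primes:

```python
import sympy as sp

q = sp.symbols('q')
N = 20  # series truncation order (illustrative)

# theta_2(0,q) = 2 q^(1/4) u(q) with u(q) = sum_{n>=0} q^(n(n+1));
# the q^(1/4) prefactor turns into the factor 16*q under the 4th power.
u = sum(q**(n*(n + 1)) for n in range(6))
th3 = 1 + 2*sum(q**(n*n) for n in range(1, 6))

# inverse elliptic nome: m(q) = (theta_2/theta_3)^4 = 16 q u^4 / theta_3^4
m = sp.series(16*q*u**4 / th3**4, q, 0, N).removeO()

# G(q) = Li_2(m(q)) = sum_{k>=1} m(q)^k / k^2, truncated at order N
G = sp.expand(sum(m**k / sp.Integer(k)**2 for k in range(1, N)))

# EGF coefficients a_k = k! [q^k] G; compare with the conjectured pattern
# (zero at composite k, matching the displayed range of the sequence)
for k in range(2, N):
    a_k = sp.factorial(k) * G.coeff(q, k)
    frac = a_k - sp.floor(a_k)
    conjectured = sp.Rational((-16) % k, k) if sp.isprime(k) else sp.Integer(0)
    print(k, frac, conjectured, frac == conjectured)
```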
ePSproc: Post-processing suite for ePolyScat electron-molecule scattering calculati...
Paul Hockett

September 03, 2019
_Article details_

Software metapaper, structured for the Journal of Open Research Software (JORS).

- Online version (Authorea): https://www.authorea.com/users/71114/articles/122402/_show_article
- Github (software repository): github.com/phockett/ePSproc
- Figshare repository (manuscript & source files): DOI: 10.6084/m9.figshare.3545639

History:
- 10/08/16 - This fork for review.
- 12/11/16 - arXiv version uploaded, 1611.04043
- 03/09/19 - Finally working on a python version; see the Github pages for updates. Full documentation is now on Read the Docs.
Suggestions for new NIH grant applicants
Kevin J. Black

February 13, 2023
© 2016-2020, Kevin J. Black. This work is licensed under a Creative Commons Attribution 4.0 International License.
Several Proofs of Security for a Tokenization Algorithm
Riccardo Longo
Riccardo Aragona

and 2 more

March 28, 2017
In this paper we propose a tokenization algorithm of Reversible Hybrid type, as defined in PCI DSS guidelines for designing a tokenization solution, based on a block cipher with a secret key and (possibly public) additional input. We provide some formal proofs of security for it, which imply our algorithm satisfies the most significant security requirements described in PCI DSS tokenization guidelines. Finally, we give an instantiation with concrete cryptographic primitives and fixed length of the PAN, and we analyze its efficiency and security.
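To fix intuition, here is a toy Python sketch of the general reversible-hybrid shape: a block cipher under a secret key applied to the PAN, mixed with a (possibly public) additional input. This is only an illustration with placeholder inputs; it is not the algorithm defined or proved secure in the paper (in particular, it is not format-preserving).

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
import hashlib

def tokenize(pan: str, key: bytes, aux: bytes) -> str:
    """Toy reversible tokenization: AES on the PAN block, mixed with a
    (possibly public) additional input via a hash-derived mask."""
    mask = hashlib.sha256(aux).digest()[:16]
    block = bytes(a ^ b for a, b in zip(pan.encode().ljust(16, b'\x00'), mask))
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return (enc.update(block) + enc.finalize()).hex()

def detokenize(token: str, key: bytes, aux: bytes) -> str:
    """Invert tokenize(): decrypt, remove the mask, strip the padding."""
    dec = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
    block = dec.update(bytes.fromhex(token)) + dec.finalize()
    mask = hashlib.sha256(aux).digest()[:16]
    return bytes(a ^ b for a, b in zip(block, mask)).rstrip(b'\x00').decode()

# example with a placeholder 16-byte key and auxiliary input
key = bytes(range(16))
tok = tokenize("4111111111111111", key, aux=b"merchant-42")
assert detokenize(tok, key, aux=b"merchant-42") == "4111111111111111"
```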
How To Write Mathematical Equations, Expressions, and Symbols with LaTeX: A cheatshee...
Authorea Help
Matteo Cantiello

and 3 more

May 15, 2019
WHAT IS LATEX?

LaTeX is a programming language that can be used for writing and typesetting documents. It is especially useful for writing mathematical notation such as equations and formulae.

HOW TO USE LATEX TO WRITE MATHEMATICAL NOTATION

There are three ways to enter “math mode” and present a mathematical expression in LaTeX:

1. _inline_ (in the middle of a text line)
2. as an _equation_, on a separate dedicated line
3. as a full-sized inline expression (_displaystyle_)

_inline_ Inline expressions occur in the middle of a sentence. To produce an inline expression, place the math expression between dollar signs ($). For example, typing $E=mc^2$ yields E = mc².

_equation_ Equations are mathematical expressions that are given their own line and are centered on the page. These are usually used for important equations that deserve to be showcased on their own line, or for large equations that cannot fit inline. To produce an equation, place the mathematical expression between \[ and \]. Typing \[x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}\] yields the quadratic formula displayed on its own line.

_displaystyle_ To get full-sized inline mathematical expressions, use \displaystyle. Typing I want this $\displaystyle \sum_{n=1}^{\infty} \frac{1}{n}$, not this $\sum_{n=1}^{\infty} \frac{1}{n}$. yields a full-sized sum in the first case and a compact inline sum in the second.

SYMBOLS (IN _MATH_ MODE)

The basics: as discussed above, math mode in LaTeX happens inside dollar signs ($...$), inside square brackets \[...\], and inside equation and displaystyle environments. Here’s a cheatsheet showing what is possible in a math environment:

-------------------------- ------------------ ---------------
_description_              _command_          _output_
addition                   +                  +
subtraction                -                  −
plus or minus              \pm                ±
multiplication (times)     \times             ×
multiplication (dot)       \cdot              ⋅
division symbol            \div               ÷
division (slash)           /                  /
simple text                \text{text}        text
infinity                   \infty             ∞
dots                       1,2,3,\ldots       1, 2, 3, …
dots                       1+2+3+\cdots       1 + 2 + 3 + ⋯
fraction                   \frac{a}{b}        a⁄b
square root                \sqrt{x}           √x
nth root                   \sqrt[n]{x}        ⁿ√x
exponentiation             a^b                aᵇ
subscript                  a_b                a_b
absolute value             |x|                |x|
natural log                \ln(x)             ln(x)
logarithms                 \log_a b           logₐ b
exponential function       e^x=\exp(x)        eˣ = exp(x)
deg                        \deg(f)            deg(f)
degree                     \degree            °
arcmin                     ^\prime            ′
arcsec                     ^{\prime\prime}    ″
circle plus                \oplus             ⊕
circle times               \otimes            ⊗
equal                      =                  =
not equal                  \ne                ≠
less than                  <                  <
less than or equal to      \le                ≤
greater than or equal to   \ge                ≥
approximately equal to     \approx            ≈
-------------------------- ------------------ ---------------
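Putting the three modes together, here is a minimal compilable example (standard LaTeX, nothing Authorea-specific):

```latex
\documentclass{article}
\begin{document}

% inline: math between single dollar signs flows with the sentence
Einstein's relation $E = mc^2$ sits inline with the text.

% equation: \[ ... \] puts the expression on its own centered line
The quadratic formula deserves its own line:
\[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \]

% displaystyle: full-sized math inside a line of text
Compare $\displaystyle \sum_{n=1}^{\infty} \frac{1}{n}$ (displaystyle)
with the compact inline form $\sum_{n=1}^{\infty} \frac{1}{n}$.

\end{document}
```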
T-SNE visualization of large-scale neural recordings
George Dimitriadis
and 2 more

April 25, 2016
Electrophysiology is entering the era of ‘Big Data’. Multiple probes, each with hundreds to thousands of individual electrodes, are now capable of simultaneously recording from many brain regions. The major challenge confronting these new technologies is transforming the raw data into physiologically meaningful signals, i.e. single unit spikes. Sorting the spike events of individual neurons from a spatiotemporally dense sampling of the extracellular electric field is a problem that has attracted much attention, but is still far from solved. Current methods still rely on human input and thus become unfeasible as the size of the data sets grows exponentially. Here we introduce the t-distributed stochastic neighbor embedding (t-sne) dimensionality reduction method as a visualization tool in the spike sorting process. T-sne embeds the n-dimensional extracellular spikes (n = number of features by which each spike is decomposed) into a low (usually two) dimensional space. We show that such embeddings, even starting from different feature spaces, form obvious clusters of spikes that can be easily visualized and manually delineated with a high degree of precision. We propose that these clusters represent single units and test this assertion by applying our algorithm to labeled data sets from both hybrid and paired juxtacellular/extracellular recordings. We have released a graphical user interface (gui) written in python as a tool for the manual clustering of the t-sne embedded spikes and as a tool for an informed overview and fast manual curation of results from other clustering algorithms. Furthermore, the generated visualizations offer evidence in favor of the use of probes with higher density and smaller electrodes. They also graphically demonstrate the diverse nature of the sorting problem when spikes are recorded with different methods and arise from regions with different background spiking statistics.
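For readers who want to try the embedding step itself, here is a generic scikit-learn sketch (this is not the authors’ released GUI; the random feature matrix and perplexity value are placeholders):

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholder spike-feature matrix: one row per detected spike,
# one column per feature (e.g. principal components of the waveform).
rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 12))

# Embed the n-dimensional spikes into 2 dimensions for visual clustering.
embedding = TSNE(n_components=2, perplexity=30.0, init="pca").fit_transform(features)

# Each row of `embedding` is the (x, y) position of one spike; plotting
# these points reveals putative single-unit clusters.
print(embedding.shape)  # (5000, 2)
```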
Predicting Peptide-MHC Binding Affinities With Imputed Training Data
Alex Rubinsteyn
Timothy O'Donnell

and 3 more

April 19, 2016
Predicting the binding affinity between MHC proteins and their peptide ligands is a key problem in computational immunology. State of the art performance is currently achieved by the allele-specific predictor NetMHC and the pan-allele predictor NetMHCpan, both of which are ensembles of shallow neural networks. We explore an intermediate between allele-specific and pan-allele prediction: training allele-specific predictors with synthetic samples generated by imputation of the peptide-MHC affinity matrix. We find that the imputation strategy is useful on alleles with very little training data. We have implemented our predictor as an open-source software package called MHCflurry and show that MHCflurry achieves competitive performance to NetMHC and NetMHCpan.
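The matrix-imputation idea can be sketched generically with scikit-learn’s KNNImputer (the actual imputation methods used in the paper may differ, and the tiny affinity matrix below is invented for illustration):

```python
import numpy as np
from sklearn.impute import KNNImputer

# Toy peptide x allele affinity matrix; NaN marks unmeasured pairs.
affinities = np.array([
    [0.9, np.nan, 0.2],
    [0.8, 0.7,    np.nan],
    [np.nan, 0.6, 0.3],
    [0.4, 0.5,    0.1],
])

# Impute missing entries from the k nearest peptides (rows); the filled
# matrix supplies synthetic training samples for alleles with sparse data.
filled = KNNImputer(n_neighbors=2).fit_transform(affinities)
print(filled.round(2))
```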
How many scholarly articles are written in LaTeX?      
Alberto Pepe

February 21, 2017
How many people use the typesetting language LaTeX? This is obviously a hard question. However, another way to look at it is to calculate the percentage of published scholarly articles written in LaTeX.
Agnostic cosmology in the CAMEL framework
plaszczy

March 07, 2016
Cosmological parameter estimation is traditionally performed in the Bayesian context. By adopting an “agnostic” statistical point of view, we show the value of confronting the Bayesian results with a frequentist approach based on profile likelihoods. To this purpose, we have developed the _Cosmological Analysis with a Minuit Exploration of the Likelihood_ (CAMEL) software. Written from scratch in pure C++, emphasis was put on building a clean and carefully designed project where new data and/or cosmological computations can be easily included. CAMEL incorporates the latest cosmological likelihoods and gives access _from the very same input file_ to several estimation methods:

- a high quality Maximum Likelihood Estimate (a.k.a. “best fit”) using MINUIT,
- profile likelihoods,
- a new implementation of an Adaptive Metropolis MCMC algorithm that relieves the burden of reconstructing the proposal distribution.

We present here those various statistical techniques and roll out a full use-case that can then be used as a tutorial. We revisit the ΛCDM parameters determination with the latest Planck data and give results with both methodologies. Furthermore, by comparing the Bayesian and frequentist approaches, we discuss a “likelihood volume effect” that affects the optical reionization depth when analyzing the high multipoles part of the Planck data. The software, used in several Planck data analyses, is available from http://camel.in2p3.fr. Using it does not require advanced C++ skills.
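As a minimal illustration of the profile-likelihood idea (a toy Gaussian likelihood in Python, not CAMEL’s C++ machinery):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy negative log-likelihood in (theta, nu): theta is the parameter of
# interest, nu a nuisance parameter correlated with it.
def neg_log_like(theta, nu):
    return 0.5 * (theta - 1.0) ** 2 + 0.5 * (nu - 0.5 * theta) ** 2

# Profile likelihood: for each theta, minimize over the nuisance nu.
thetas = np.linspace(-1, 3, 81)
profile = [minimize_scalar(lambda nu: neg_log_like(t, nu)).fun for t in thetas]

# chi2-like curve: 2*(profile - min); a 68% interval is where it is < 1.
chi2 = 2 * (np.array(profile) - min(profile))
interval = thetas[chi2 < 1.0]
print(interval.min(), interval.max())
```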
Cataloguing Molecular Cloud Populations in Galaxy M100
Natalie Hervieux
Erik Rosolowsky

March 01, 2016
We compare the properties of giant molecular associations in the galaxy Messier 100 (M100) with those of the less massive giant molecular clouds in the Milky Way and Local Group, while also observing how those properties change within M100 itself. From this analysis of cloud mass, radius, and velocity dispersion, we determine that the clouds are in or near virial equilibrium and that their properties are consistent with the underlying trends for the Milky Way. We find differences between the nuclear, arm, and inter-arm M100 populations, such as the nuclear clouds being the most massive and turbulent, and the arm and inter-arm populations having differently shaped mass distributions from one another. Through the analysis of velocity gradients, cloud motion can be attributed to turbulence rather than large scale shearing motion. This is supported by our comparison with turbulence regulated star formation models. Finally, we calculate ISM depletion times to see how quickly clouds turn gas into stars and find that clouds form stars more efficiently if they are turbulent or dense.
El Niño Composites
Tristan Hauser

May 21, 2020
A lot of attention has been given to the consequences of the latest strong El Niño event. People often talk about meteorological phenomena as El Niño (or La Niña) conditions, but what are these, and how do we come about our notions of what a ’typical’ El Niño event is? How consistent do we expect the effects of this phenomenon to be, especially when these ’signature effects’ occur thousands of kilometers away from the Pacific Ocean? Often, understanding of the typical effects of large scale climate variations is derived from _composites_. This is a common statistical method where elements are classified into groups based on some external consideration, and the properties of each group are then expressed by the average of all the elements it contains. This can be a very efficient way to visualize large data sets, but it can also imply more consistency within groups than is actually the case. This post goes over some of the mechanics of creating composites, and ways to explore to what degree they can be taken at ’face value’.
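The mechanics are easy to sketch: a composite is just a group mean. Below is a small pandas example with synthetic anomalies and made-up ENSO labels (the spread column is exactly the “consistency within groups” question raised above):

```python
import numpy as np
import pandas as pd

# Toy yearly temperature anomalies with a synthetic ENSO label per year.
rng = np.random.default_rng(1)
years = np.arange(1980, 2020)
anomaly = rng.normal(size=years.size)
enso = rng.choice(["El Nino", "La Nina", "Neutral"], size=years.size)

df = pd.DataFrame({"year": years, "anomaly": anomaly, "enso": enso})

# A composite is the group mean; the standard deviation shows how much
# individual events deviate from the "typical" picture.
composite = df.groupby("enso")["anomaly"].agg(["mean", "std", "count"])
print(composite)
```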
The Surfer's Guide to Gravitational Waves
Matteo Cantiello

February 20, 2017
IN A NUTSHELL: Gravitational waves are ripples in the fabric of spacetime produced by violent events, like the merger of two black holes or the explosion of a massive star. Unlike light (electromagnetic waves), gravitational waves are not absorbed or altered by intervening material, so they are very clean proxies of the physical process that produced them. They are expected to travel at the speed of light and, if detected, they could give precious information about the cataclysmic processes that originated them and about the very nature of gravity. That’s why the direct detection of gravitational waves is such an important endeavor. Definitely worthy of a Nobel prize in physics.
Tourette syndrome research highlights from 2016
Kevin J. Black

August 02, 2017
This article presents highlights chosen from research on Tourette syndrome and other tic disorders that appeared during 2016. Selected articles, judged to represent meaningful advances in the field, are briefly summarized.
Generation of Shear Waves by Laser in Soft Media in the Ablative and Thermoelastic Re...
Pol Grasland-Mongrain
Yuankang Lu


January 07, 2016
This article describes the generation of elastic shear waves in a soft medium using a laser beam. Our experiments show two different regimes depending on laser energy. Physical modeling of the underlying phenomena reveals a thermoelastic regime, caused by a local dilatation resulting from temperature increase, and an ablative regime, caused by a partial vaporization of the medium by the laser. Computed theoretical displacements are close to experimental measurements. A numerical study based on the physical modeling gives propagation patterns comparable to those generated experimentally. These results provide a physical basis for the feasibility of a shear wave elastography technique (a technique which measures the stiffness of a soft solid from shear wave propagation) using a laser beam.
Pharmit: Interactive Exploration of Chemical Space
Jocelyn Sunseri
David Koes


January 07, 2016
Pharmit (http://pharmit.csb.pitt.edu) provides an online, interactive environment for the virtual screening of large compound databases using pharmacophores, molecular shape, and energy minimization. Users can import, create, and edit virtual screening queries in an interactive browser-based interface. Queries are specified in terms of a pharmacophore, a spatial arrangement of the essential features of an interaction, and molecular shape. Search results can be further ranked and filtered using energy minimization. In addition to a number of pre-built databases of popular compound libraries, users may submit their own compound libraries for screening. Pharmit uses state-of-the-art sub-linear algorithms to provide interactive screening of millions of compounds. Queries typically take a few seconds to a few minutes depending on their complexity. This allows users to iteratively refine their search during a single session. The easy access to large chemical datasets provided by Pharmit simplifies and accelerates structure-based drug design. Pharmit is available under a dual BSD/GPL open-source license.
Thermodynamics of the magnetocaloric effect in the swept field and stepped field meas...
Yasu Takano
Nathanael A. Fortune


January 29, 2021
ENERGY CONSERVATION IN SWEPT FIELD LIMIT

For a calorimeter sample (plus addenda) weakly thermally linked to a temperature controlled reservoir, energy conservation implies

\[-T\,dS = \kappa\,\Delta T\,dt + C_{\mathrm{addenda}}\,dT\]

where κ is the sample-to-reservoir thermal conductance and C_addenda is the heat capacity of the actual addenda (such as the thermometer, heater, and glue or grease binding the sample to the sensors) plus the heat capacity of the sample lattice (due to phonons). The left hand side term is the heat released by the system — which in the case of a spin system, for example, would be the heat released by the spins — when the field is changed by dH. The minus sign indicates that system entropy decreases as heat is released. Most of the released heat flows to the reservoir but some fraction heats up the addenda (to the same temperature as the system). The first term on the right hand side describes heat flow to the reservoir. The second term describes the temperature rise of the addenda. In a non-adiabatic relaxation-time or ac-calorimeter like that used in our swept-field measurements, the first term dominates. In contrast, in an adiabatic measurement, the first term is negligible.
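In the swept-field limit the addenda term can be dropped, so the entropy change follows from integrating the measured temperature offset. A small numpy sketch with made-up numbers (not the authors’ analysis code):

```python
import numpy as np

# Made-up swept-field record: time t (s), sample-reservoir offset dT (K),
# reservoir temperature T (K), and thermal conductance kappa (W/K).
t = np.linspace(0.0, 100.0, 1001)
dT = 1e-5 * np.exp(-((t - 50.0) / 10.0) ** 2)  # heat release near a transition
T = 0.1
kappa = 1e-7

# -T dS = kappa * dT * dt  (swept-field limit: addenda term dropped)
# => S(t) - S(0) = -(kappa / T) * cumulative integral of dT
dS = -(kappa / T) * np.cumsum(dT) * (t[1] - t[0])
print("entropy change of the sample:", dS[-1], "J/K")
```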