AUTHOREA

Preprints

Explore 40,879 preprints on the Authorea Preprint Repository

A preprint on Authorea can be a complete scientific manuscript submitted to a journal, an essay, a whitepaper, or a blog post. Preprints on Authorea can contain datasets, code, figures, interactive visualizations and computational notebooks.
Read more about preprints.

Unbelievable Power: The Physics of Nuclear Blast Waves
Matteo Cantiello

September 26, 2017
The power of the atom

At the beginning of the 20th century, major advances in our understanding of fundamental physics led scientists to the discovery of nuclear energy. An unprecedented amount of power could in principle be released by combining (nuclear fusion) or breaking apart (nuclear fission) certain atomic species under special conditions. Nuclear fusion in particular was understood to be the process powering the immense luminosity of stars, including our Sun. Nuclear fusion is the energy source illuminating our Universe.

Why so much energy?

Burning fossil fuels releases chemical energy, which is stored in the mild electromagnetic interactions between atoms in a compound. Nuclear energy, on the other hand, comes from the very central region of the atom. As the name suggests, it is stored in the nucleus, whose constituents are held together by the strong force. The strong force is much stronger than all the other fundamental forces, including the electromagnetic one. As a result, nuclear fuel has an energy density about ten million times larger than chemical fuel. If your car ran on nuclear fuel, its gas mileage would be something like hundreds of millions of miles per gallon.

From light to darkness

The physics revolution that characterized the first three decades of the 20th century, and which led to the development of quantum mechanics and nuclear physics, was followed by the Second World War. In 1942, the United States started a very ambitious project to build a nuclear weapon. The Manhattan Project, led by Robert Oppenheimer and gathering some of the best physicists on the planet, culminated in the successful Trinity test in 1945 (Fig. \ref{982837}). The first detonation of a nuclear weapon was the most shocking demonstration of the great power of science and the scientific method. Less than a month later, two nuclear bombs were dropped on the Japanese cities of Hiroshima and Nagasaki, bringing about the end of WWII and the deaths of hundreds of thousands of people.
The sheer destruction inflicted by the atomic bomb left an indelible mark on humankind's consciousness, formally starting a new era in the history of man: an era of greater responsibility. While no nuclear weapons have been purposely used in war since, more than 2,000 nuclear tests have been performed after the Trinity, Hiroshima, and Nagasaki explosions.
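The ten-million-fold energy-density comparison above can be checked with a rough back-of-the-envelope calculation. The numbers below are round illustrative values (gasoline at roughly 46 MJ/kg; uranium-235 fission releasing roughly 200 MeV per nucleus), not figures taken from the article:

```python
# Rough check of the nuclear-vs-chemical energy density ratio.
# Assumed round numbers: gasoline ~46 MJ/kg; U-235 fission frees
# ~200 MeV per nucleus of mass ~235 atomic mass units.
MEV_IN_J = 1.602e-13      # joules per MeV
AMU_IN_KG = 1.661e-27     # kilograms per atomic mass unit

gasoline_j_per_kg = 46e6
fission_j_per_kg = (200 * MEV_IN_J) / (235 * AMU_IN_KG)  # ~8e13 J/kg

ratio = fission_j_per_kg / gasoline_j_per_kg
print(f"fission/chemical energy density ratio: {ratio:.1e}")
```

Fission alone gives a factor of roughly two million; fusion fuels such as deuterium-tritium release several times more energy per kilogram, which is where a figure of order ten million comes from.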
Software for web-based tic suppression training
Jonathan Black
Kevin J. Black
and 1 more

December 08, 2017
Exposure and response prevention (ERP) is a first-line behavior therapy for obsessive-compulsive disorder, and has also been tested in Tourette syndrome (TS). However, ERP for tic disorders requires intentional tic suppression, which for some patients is difficult even for brief periods. Additionally, practical access to behavior therapy is difficult for many patients, especially those in rural areas. The authors present a simple, working web platform (TicTrainer) that implements a strategy called reward-enhanced exposure and response prevention (RE–ERP). This strategy sacrifices most expert therapist components of ERP, focusing only on increasing the duration of time for which the user can suppress tics through automated differential reinforcement of tic-free periods (DRO). RE–ERP requires an external tic monitor, such as a parent, during training sessions. The user sees increasing digital rewards for longer and longer periods of successful tic suppression, similar to a video game score. TicTrainer is designed for security, storing no personally identifiable health information, and has features to facilitate research, including optional masked comparison of tics during DRO _vs._ noncontingent reward conditions.
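The reward logic described above can be sketched in a few lines. This is an illustrative toy, not the authors' TicTrainer code: the fixed 10-second interval, the one-token-per-interval reward, and all names are assumptions made for illustration.

```python
# Toy sketch of differential reinforcement of tic-free periods (DRO):
# a completed tic-free interval earns a token; a reported tic restarts
# the current interval. (Not the authors' TicTrainer implementation.)
def run_dro_session(tic_times, interval=10.0, session_len=60.0):
    """tic_times: seconds from session start at which the external
    monitor (e.g. a parent) reported a tic. Returns tokens earned."""
    tokens = 0
    t = 0.0
    tics = sorted(tic_times)
    while t + interval <= session_len:
        in_window = [x for x in tics if t < x <= t + interval]
        if in_window:
            t = in_window[0]   # tic observed: restart the interval
        else:
            tokens += 1        # interval survived tic-free
            t += interval
    return tokens

print(run_dro_session([12.0, 47.0]))  # two tics in a 60 s session
```

A real implementation in the spirit of the abstract would also escalate rewards with longer suppression streaks and support the masked noncontingent-reward comparison condition.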
TicTimer software for measuring tic suppression
Jonathan Black
Jonathan M. Koller
and 2 more

August 01, 2017
Woods and Himle developed a standardized tic suppression paradigm (TSP) for the experimental setting to quantify the effects of intentional tic suppression in Tourette syndrome. The present article describes a Java program that automates record keeping and reward dispensing during the several experimental conditions of the TSP. The software optionally can be connected to a commercial reward token dispenser to further automate reward delivery to the participant. The timing of all tics, 10-second tic-free intervals, and dispensed rewards is recorded in plain text files for later analysis. Expected applications include research on Tourette syndrome and related disorders.
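As a sketch of the kind of record keeping described, the core bookkeeping step, detecting each completed 10-second tic-free interval, might look like the following. This is in Python rather than Java, with invented names; it is not the published TicTimer code.

```python
# Hypothetical sketch of tic suppression paradigm record keeping
# (not the published TicTimer): given tic timestamps, find each
# completed 10-second tic-free interval, the event that triggers
# a reward. The interval clock restarts after every tic.
def tic_free_intervals(tic_times, session_len, interval=10.0):
    """Return (start, end) pairs of completed tic-free intervals."""
    events = []
    last_tic = 0.0
    for t in sorted(tic_times) + [session_len]:
        start = last_tic
        while start + interval <= t:
            events.append((start, start + interval))
            start += interval
        last_tic = t
    return events

# One tic at 12 s in a 30 s condition: intervals end at 10 s and 22 s.
print(tic_free_intervals([12.0], 30.0))
```

Each returned pair could then be written to a plain text log alongside the tic timestamps, as the abstract describes.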
The arXiv of the future will not look like the arXiv
Alberto Pepe
Matteo Cantiello
and 2 more

June 08, 2017
The arXiv is the most popular preprint repository in the world. Since its inception in 1991, the arXiv has allowed researchers to freely share publication-ready articles prior to formal peer review. The growth and popularity of the arXiv emerged as a result of new technologies that made document creation and dissemination easy, and of cultural practices in which collaboration and data sharing were dominant. The arXiv occupies a unique place in the history of research communication and of the Web itself; however, it has arguably changed very little since its creation. Here we look at the strengths and weaknesses of the arXiv in an effort to identify what improvements can be made based on new technologies not previously available. Based on this, we argue that a modern arXiv might in fact not look at all like the arXiv of today.

Introduction

The arXiv, pronounced "archive", is the most popular preprint repository in the world. Started in 1991 by physicist Paul Ginsparg, the arXiv allows researchers to freely share publication-ready articles prior to formal peer review and publication. Today, the arXiv publishes over 10,000 articles each month from high-energy physics, computer science, quantitative biology, statistics, quantitative finance, and other fields (see Fig. \ref{104668}). The early success of the arXiv stems from the introduction of new technological advances paired with a well-developed culture of collaboration and sharing. Indeed, before the arXiv even existed, physicists were already sharing recently finished manuscripts, first by mail and later by email. To understand the success of the arXiv it is important to understand its history. Below we highlight a brief history of the technology, services, and cultural norms that predate the arXiv and were integral to its early and continued success.
The history of the arXiv

Prior to the arXiv, preprint distribution was handled by institutional repositories, such as the SPIRES-HEP database (Stanford Physics Information REtrieval System - High Energy Physics) at the Stanford Linear Accelerator Center (SLAC) and the Document Server at CERN. Developed in the early 1970s, SPIRES created a bibliographic standard and centralized resource that allowed high energy physics researchers across universities to email the database and request a list of preprints be sent to them. Since the papers themselves could not be emailed at the time, the system relied on traditional mail. The resource was immediately successful, with requests numbering in the thousands within the first few years \cite{Elizalde_2017}. While SPIRES greatly improved the flow of information, it still took weeks for articles (preprints) to be sent and received. A new typesetting system would soon emerge and change this.

TeX, pronounced "tech", was developed by Donald Knuth in the late 1970s as a way for researchers to write and typeset articles programmatically. Soon after the introduction of TeX, Leslie Lamport set a standard for TeX formatting, called LaTeX, which made it very easy for researchers to professionally typeset their documents on their own. This system made sharing papers easier and cheaper than ever before. Indeed, many, if not most, researchers at the time relied upon secretaries or typists to type their work, which then had to be photocopied in order to be sent via mail to a handful of other researchers. TeX allowed researchers to write their documents as plain-text source files that could be emailed, then downloaded and compiled, without the need for physical mail. Soon, physicists were emailing and downloading .tex files at great rates, hastening the process of research communication like never before. Such a system immediately created a new problem for researchers: information overload.
Researchers were exchanging emails containing preprints at great rates, and given the size of computer hard drives at the time, email servers were running out of space \cite{Ginsparg_2011}. To address this problem, an automated email server, called arXiv, was set up in the early 1990s. The arXiv allowed researchers to automatically request preprints via email as needed. It would soon become one of the world's first web servers, and today it still serves as one of the most open and efficient forms of research communication in the world. The arXiv was a leader in introducing and utilizing new technology when it was launched; however, it has arguably changed very little since its inception, despite a wealth of new technologies now available. Here we look at the strengths and weaknesses of the arXiv in an effort to identify what improvements can be made based on new technologies and tools, and propose that a modern arXiv might in fact not look at all like the arXiv of today --- a development that will likely occur with or without the arXiv.
Experiments testing Bell's inequality with local real source
Peifeng Wang

May 02, 2019
Aside from Bell’s inequality, QM and local real theory have other specifications that can be observed in experiments. To explore these specifications, we re-examine the EPR paradox to show that non-locality arises from the absence of a location variable. Our analysis is then applied to several reported experiments. 1) In a known short-range Bell experiment with high detection efficiency, a portion of the presented data agrees more with the local real model than with QM. 2) The so-called non-maximally entangled states in several experiments are essentially partially entangled photons, with a large local real part helping the violation of Bell’s inequality, and the reported event counts deviate from the expected entanglement model. 3) In long-range EPR experiments aimed at closing the locality loophole, interactions with the local real apparatus prior to measurement call the entanglement into question.
On the Number of k-Crossing Partitions
Benedict Irwin

May 05, 2021
ABSTRACT

I introduce k-crossing paths and partitions and count the number of paths for each desired number of crossings k for systems with 11 points or fewer. I give some conjectures on the number of possible paths for certain numbers of crossings as a function of the number of points.

INTRODUCTION

An order-n meandric partition is an ordering of the integers 1⋯n such that a path coming from the south-west can weave through n points labeled 1⋯n without intersecting itself and finally head east (examples are shown in Fig. 1). Counting the number of possible paths for n points is a tricky problem, and no recursion relation, generating function, or explicit formula for the number of order-n meandric partitions appears to have been found. This work is concerned with the number of paths that must intersect themselves exactly k times; when k is 0, we recover the meandric paths. It is always possible to draw a path that deliberately crosses itself as many times as desired; because of this, we only consider a path to be k-crossing if k is the smallest number of crossings possible, that is, a path that must cross itself k times (an example of a 3-crossing path over 9 points is given in Fig. 2).

RESULTS

Define a_k(n) to be the number of configurations of n points where the path through them is forced to cross itself k times. For 0-crossings on n points we have the open meandric numbers, given in the OEIS as A005316:

a_0(n) = 1, 1, 1, 2, 3, 8, 14, 42, 81, 262, 538, 1828, 3926, ...   (n = 0, 1, ...)

This work has counted a_k(n) for k > 0 by calculating all n!
permutations of the n integers and checking the minimal number of crossings for each. We then have:

n        = 0, 1, 2, 3, 4,  5,  6,   7,    8,    9,     10,     11, ...
a_0(n)   = 1, 1, 1, 2, 3,  8,  14,  42,   81,   262,   538,    1828, ...
a_1(n)   = 0, 0, 1, 4, 10, 36, 85,  312,  737,  2760,  6604,   25176, ...
a_2(n)   = 0, 0, 0, 0, 8,  42, 168, 760,  2418, 10490, 30842,  131676, ...
a_3(n)   = 0, 0, 0, 0, 2,  16, 164, 944,  4386, 22240, 83066,  398132, ...
a_4(n)   = 0, 0, 0, 0, 1,  18, 146, 1076, 6255, 37250, 168645, 908898, ...
a_5(n)   = 0, 0, 0, 0, 0,  0,  96,  960,  7388, 51968, 282122, 1711824, ...
a_6(n)   = 0, 0, 0, 0, 0,  0,  30,  440,  6472, 55140, 384065, 2642444, ...
a_7(n)   = 0, 0, 0, 0, 0,  0,  14,  368,  5176, 53920, 455944, 3575040, ...
a_8(n)   = 0, 0, 0, 0, 0,  0,  2,   66,   3542, 45960, 484058, 4336734, ...
a_9(n)   = 0, 0, 0, 0, 0,  0,  1,   72,   2011, 32280, 452504, 4661756, ...
a_10(n)  = 0, 0, 0, 0, 0,  0,  0,   0,    1172, 25066, 396493, 4709856, ...
a_11(n)  = 0, 0, 0, 0, 0,  0,  0,   0,    420,  11840, 309696, 4291440, ...
a_12(n)  = 0, 0, 0, 0, 0,  0,  0,   0,    201,  8930,  225754, 3661348, ...
a_13(n)  = 0, 0, 0, 0, 0,  0,  0,   0,    40,   2240,  151849, 2947392, ...
a_14(n)  = 0, 0, 0, 0, 0,  0,  0,   0,    18,   2040,  91147,  2103648, ...
a_15(n)  = 0, 0, 0, 0, 0,  0,  0,   0,    2,    224,   55030,  1575744, ...
a_16(n)  = 0, 0, 0, 0, 0,  0,  0,   0,    1,    270,   26762,  915924, ...
a_17(n)  = 0, 0, 0, 0, 0,  0,  0,   0,    0,    0,     14627,  665088, ...
a_18(n)  = 0, 0, 0, 0, 0,  0,  0,   0,    0,    0,     5405,   295956, ...
a_19(n)  = 0, 0, 0, 0, 0,  0,  0,   0,    0,    0,     2642,   218508, ...
a_20(n)  = 0, 0, 0, 0, 0,  0,  0,   0,    0,    0,     641,    63522, ...
a_21(n)  = 0, 0, 0, 0, 0,  0,  0,   0,    0,    0,     293,    54672, ...
a_22(n)  = 0, 0, 0, 0, 0,  0,  0,   0,    0,    0,     48,     8964, ...
a_23(n)  = 0, 0, 0, 0, 0,  0,  0,   0,    0,    0,     22,     9552, ...
a_24(n)  = 0, 0, 0, 0, 0,  0,  0,   0,    0,    0,     2,      706, ...
a_25(n)  = 0, 0, 0, 0, 0,  0,  0,   0,    0,    0,     1,      972, ...

where the vertical sum over each column gives n!.

CONJECTURES

The above data has led to a few conjectures:

a_{n^2}(2n) = 1

In words: there is exactly one path through 2n points that must cross itself n² times. The partitions associated with these paths are

(2,1)
(3,1,4,2)
(4,1,5,2,6,3)
(5,1,6,2,7,3,8,4)
(6,1,7,2,8,3,9,4,10,5)

and a clear interlaced pattern can be seen (an example is given in Fig. 3). Further,

a_{n^2-1}(2n) = 2, n > 1
a_{n^2-2}(2n) = 4n + 2, n > 2
a_{n^2-3}(2n) = 8n + 8, n > 3
a_{n^2}(2n+1) = 2(n+1)3^{n-1}, n > 1
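A quick sanity check on the tabulated counts: since every one of the n! permutations has exactly one minimal crossing number, each column must sum to n!. A few columns, verified in Python:

```python
# Verify that the a_k(n) counts in a column sum to n!, as every
# permutation of n points has exactly one minimal crossing number.
from math import factorial

columns = {  # n -> nonzero a_k(n) values read down the column
    4: [3, 10, 8, 2, 1],
    5: [8, 36, 42, 16, 18],
    6: [14, 85, 168, 164, 146, 96, 30, 14, 2, 1],
}
for n, counts in columns.items():
    assert sum(counts) == factorial(n), n
print("columns sum to n! for n = 4, 5, 6")
```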
-1 has Clear Semantics? Hold my Beer.
Deyan Ginev

May 01, 2017
This is a story of _semantics-by-convention_ gone wrong that hit us at Authorea last week.
Electrochemical Roughening and Carbon Nanotube Coating of Tetrodes for Chronic Single...
Zifeng Xia
Gonzalo Arias
and 11 more

August 18, 2019
ABSTRACT

Stable recordings are a precondition to understanding the fundamental role of long-term brain processes involved in neural plasticity, learning, pathogenesis, and aging. Despite recent advances in materials engineering, digital signal acquisition, and analysis algorithms, stable recording from isolated neurons over longer periods of time remains a challenge. In this study, we combined advances in material chemistry and surgical technique to develop a "Magdeburger" multi-tetrode array that enables parallel recording of multiple single neurons with long-term signal stability and high signal-to-noise ratio at a reasonable cost. Flexible platinum-iridium tetrodes were electrochemically roughened and coated with carbon nanotubes, thereby decreasing electrode impedance and increasing charge transfer. Packaging of multi-tetrode arrays, tetrode rigidity, and insertion techniques were optimized to minimize tetrode tip movement and to allow simultaneous recordings from independently targeted brain regions even at greater depths in both rodents and primates. Together, the Magdeburger probe provides a basis for a wide range of experimental and translational approaches that require long-term-stable and simultaneous high-quality recordings across different structures throughout the mammalian brain. Areas of potential application include cognitive learning and memory, aging, pathogenesis, neural correlates for behavioral performance, and the development of neuronal brain-computer interfaces for humans.
Smoke gets in your tics
Kevin J. Black

April 25, 2017
Many (though not all) of my patients who have tried marijuana have felt that their tics improved after using it. Such self-treatment is not rare (poster P94 here), and other doctors report similar results (see for example poster P6 here). Pharmacological benefits from cannabis products are plausible, since cannabinoid receptors in the brain's basal ganglia are well positioned to affect movement. Of course, in addition to any real benefit from marijuana, there could be expectation effects, or one could simply care less about tics when high. Random-allocation clinical trials with blind rating of benefit (RCTs) are essential to demonstrating whether marijuana has any true benefit for tics. Müller-Vahl and colleagues carried out two RCTs about 15 years ago in Tourette syndrome (TS) using THC (tetrahydrocannabinol), the main intoxicating ingredient in cannabis. Both trials showed benefit, but the trials were relatively small. Two to three years ago, the Tourette Association of America funded two pilot studies in this field, but results have not yet been reported. One trial, at Yale, was to study the FAAH (fatty acid amide hydrolase) inhibitor PF-04457845 in TS, but the trial was placed on clinical hold pending results from a different trial. Investigators at Toronto Western Hospital were funded for a trial in TS of medical cannabis products with varying concentrations of THC and cannabidiol. Cannabidiol is being studied in several brain disorders, including epilepsy, with hopes that it may provide benefit without the psychological side effects of THC. Not surprisingly, the paucity of data has led to different viewpoints. Müller-Vahl has argued that THC may be appropriate in some TS patients, whereas an American Academy of Neurology review and a Cochrane-style review in JAMA concluded that the evidence was insufficient to recommend THC for tic disorders.
The clinical utility of cannabinoids in TS was one of two clinical controversies debated at the 2015 First World Congress on Tourette Syndrome and Tic Disorders.
On Algorithms, ‘Big Data’ and the Future of Psychometrics
Kenneth Royal, PhD

April 17, 2017
Kenneth D. Royal and Melanie Lybarger

The topic of automation replacing human jobs has been receiving a great deal of media attention in recent months. In January, the McKinsey Global Institute (Manyika et al., 2017) published a report stating that 51% of job tasks (not jobs) could be automated with current technologies. The topic of ‘big data’ and algorithms was also briefly discussed on the Rasch listserv last year and offered a great deal of food for thought regarding the future of psychometrics in particular. Several individuals noted that a number of automated scoring procedures are being developed and fine-tuned, and each offers a great deal of promise. Multiple commenters noted the potential benefits of machine scoring using sophisticated algorithms, such as power, precision, and reliability. Some even predicted humans will become mostly obsolete in the future of psychometrics. Certainly, there is much to get excited about when thinking about the possibilities. However, there remain some issues that should encourage us to proceed with extreme caution.

The Good

For many years now algorithms have played a significant role in our everyday lives. For example, if you visit an online retailer’s website and click to view a product, you will likely be presented with a number of recommendations for related products based on your presumed interests. In fact, years ago Amazon employed a number of individuals whose job was to critique books and provide recommendations to customers. After the company developed an algorithm that analyzed data about what customers had purchased, sales increased dramatically. Although some humans were (unfortunately) replaced with computers, the ‘good’ was that sales skyrocketed for both the immediate and foreseeable long-term future and the company was able to employ many more people. Similarly, many dating websites now use information about their subscribers to predict matches that are likely to be compatible.
In some respects, this alleviates the need for friends and acquaintances to make what are often awkward introductions between two parties, and to feel guilty if the recommendation turns out to be a bad one. The ‘good’, in this case, is the ability to relieve people who would otherwise have to play matchmaker of that uncomfortable responsibility. While the aforementioned algorithms are generally innocuous, there are a number of examples that futurists predict will change most everything about our lives. For example, in recent years Google’s self-driving cars have gained considerable attention. Futurists imagine a world in which computerized cars will completely replace the need for humans to know how to drive. These cars will be better drivers than humans: they will have better reflexes, enjoy greater awareness of other vehicles, and will operate distraction-free (Marcus, 2012). Further, these cars will be able to drive closer together, at faster speeds, and will even be able to drop you off at work while they park themselves. Certainly, there is much to look forward to when things go as planned, but there is much to fear when things do not.

The Bad

Some examples of algorithmic failures are easy to measure in terms of costs. In 2010, the ‘flash crash’ occurred when an algorithmic failure at a firm in Kansas ordered a single mass sell, triggering a series of events that sent the Dow Jones Industrial Average into a tailspin. Within minutes, nearly $9 trillion in shareholder value was lost (Baumann, 2013). Although the stocks rebounded later that day, it was not without enormous anxiety, fear, and confusion. Another example involving economics also incorporates psychosocial elements. Several years ago, individuals (from numerous countries) won lawsuits against Google when the autocomplete feature linked libelous and unflattering information to them when their names were entered into the Google search engine.
Lawyers representing Google stated "We believe that Google should not be held liable for terms that appear in autocomplete as these are predicted by computer algorithms based on searches from previous users, not by Google itself" (Solomon, 2011). Courts, however, sided with the plaintiffs and required Google to manually change the search suggestions. Another example involves costs that are more abstract, and often undetectable for long periods of time. Consider ‘aggregator’ websites that collect content from other sources and reproduce it for further proliferation. News media sites are some of the most common examples of aggregators. The problem is that media organizations have long faced allegations of bias. Cass Sunstein, Director of the Harvard Law School program on Behavioral Economics and Public Policy, has long discussed the problem of ‘echo chambers’, a phenomenon that occurs when people consume only the information that reinforces their views (2009). This typically results in extreme views, and when like-minded people get together, they tend to exhibit extreme behaviors. The present political landscapes in the United States (e.g., Democrats vs. Republicans) and Great Britain (e.g., “Brexit”, Britain leaving the European Union) highlight some of the consequences that result from echo chambers. Although algorithms may not be directly responsible for divisive political views throughout the U.S. (and beyond), their mass proliferation of biased information and perspectives certainly contributes to group polarization that may ultimately leave members of a society at odds with one another. Some might argue these costs are among the most significant of all.

The Scary

Gary Marcus, a professor of cognitive science at NYU, has published a number of pieces in The New Yorker discussing what the future may hold if (and when) computers and robots reign supreme.
In a 2012 article he presents the following scenario: Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call. Marcus’ example underscores a very serious problem regarding algorithms and computer judgments: when we outsource our control, we also outsource our moral and ethical judgment. Let us consider another example. The Impermium corporation, which was acquired by Google in 2014, was essentially an anti-spam company whose software purported to automatically “identify not only spam and malicious links, but all kinds of harmful content—such as violence, racism, flagrant profanity, and hate speech—and allows site owners to act on it in real-time, before it reaches readers.” As Marcus (2015) points out, how does one “translate the concept of harm into the language of zeroes and ones?” Even if such a technical operation were possible, there remains the problem that morality and ethics are hardly a universally agreed-upon set of ideals. Morality and ethics are, at best, a work in progress for humans, as cultural differences and a host of contextual circumstances present an incredibly complex array of confounding variables. These types of programming decisions could have an enormous impact on the world. For example, algorithms that censor free speech in democratic countries could spark civil unrest among people already suspicious of their government; individuals flagged to be in violation of an offense could have their reputations irreparably damaged, be terminated by an employer, and/or be charged with crimes. When we defer to computers and algorithms to make our decisions for us, we are trusting that they have all the ‘right’ answers.
This is a very scary proposition given that the answers fed to machines come from data, which are often messy, out of date, subjective, and lacking in context. An additional concern involves the potential to program evil into code. While it is certainly possible that someone could program evil as part of an intentional, malicious act (e.g., terrorism), we are referring to evil in the sense of thoughtless actions that affect others. Melissa Orlie (1997), expanding on the idea of “ethical trespassing” originally introduced by political theorist Hannah Arendt, discusses the notion of ‘ordinary evil’. Orlie argues that despite our best intentions, humans inevitably trespass on others by failing to predict every possible way in which our decisions might affect them. Thoughtless actions and unintended consequences must, therefore, be measured, included, and accounted for in our calculations and predictions. That said, the ability to do this perfectly in most contexts can never be achieved, so it would seem each day would present a new potential to open Pandora’s box.

Extensions to Psychometrics

Some believe the ‘big data’ movement and advances in techniques designed to handle big data will, for the most part, make psychometricians obsolete. No one knows for sure what the future holds, but at present that seems a somewhat unlikely proposition. First, members of the psychometric community are notorious for being incredibly meticulous with respect to not only the accuracy of information, but also the inferences made and the way in which results are used. Further, it is apparent that the greatest lessons learned from previous algorithmic failures pertain to the unintended consequences, whether economic, social, cultural, political, or legal, that may result (e.g., glitches that cause stock market plunges, legal liability for mistakes, increased divisions in political attitudes, etc.).
Competing validity conceptualizations aside, earnest efforts to minimize unintended consequences are something most psychometricians take very seriously and already make. If anything, it seems a future in which algorithms are used exclusively could only be complemented by psychometricians who perform algorithmic audits (Morozov, 2013) and think meticulously about identifying various ‘ordinary evils’. Perhaps instead of debating whether robots are becoming more human or humans are becoming more robotic, we would be better off simply appreciating and leveraging the strengths of both?

References

Baumann, N. (2013). Too fast to fail: How high-speed trading fuels Wall Street disasters. Mother Jones. Available at: http://www.motherjones.com/politics/2013/02/high-frequency-trading-danger-risk-wall-street

Manyika, J., Chui, M., Miremadi, M., Bughin, J., George, K., Willmott, P., & Dewhurst, M. (2017). A future that works: Automation, employment, and productivity. The McKinsey Global Institute. Available at: http://www.mckinsey.com/global-themes/digital-disruption/harnessing-automation-for-a-future-that-works

Marcus, G. (2012). Moral machines. The New Yorker. Available at: http://www.newyorker.com/news/news-desk/moral-machines

Marcus, G. (2015). Teaching robots to be moral. The New Yorker. Available at: http://www.newyorker.com/tech/elements/teaching-robots-to-be-moral

Morozov, E. (2013). To Save Everything, Click Here: The Folly of Technological Solutionism. PublicAffairs, New York, NY.

Orlie, M. (1997). Living Ethically, Acting Politically. Cornell University Press, Ithaca, NY.

Solomon, K. (2011). Google loses autocomplete lawsuit. Techradar. Available at: http://www.techradar.com/news/internet/google-loses-autocomplete-lawsuit-941498

Sunstein, C. R. (2009). Republic.com 2.0. Princeton University Press, Princeton, NJ.
A key to quieter seas: half of ship noise comes from 15% of the fleet
Scott Veirs
Val Veirs
and 4 more

March 24, 2017
Abstract

Underwater noise pollution from ships is a chronic, global stressor impacting a wide range of marine species. Ambient ocean noise levels nearly doubled each decade from 1963 to 2007 in low-frequency bands attributed to shipping, inspiring a pledge from the International Maritime Organization to reduce ship noise and a call from the International Whaling Commission for member nations to halve ship noise within a decade. Our analysis of data from 1,582 ships reveals that half of the total power radiated by a modern fleet comes from just 15% of the ships, namely those with source levels above 179 dB re 1 μPa @ 1 m. We present a range of management options for reducing ship noise efficiently, including incentive-based programs, without necessarily regulating the entire fleet.
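The concentration of acoustic power in a loud minority follows from the logarithmic decibel scale: radiated power scales as 10^(SL/10), so a modest spread in source level becomes a huge spread in power. The toy fleet below is invented for illustration and is not the paper's data (the 179 dB threshold and 1,582-ship sample are theirs):

```python
# Toy illustration of why a loud minority dominates fleet noise:
# relative power scales as 10**(SL/10), so ships 30 dB louder radiate
# 1000x the power. Fleet composition here is invented, not the paper's.
source_levels = [150.0] * 85 + [180.0] * 15   # dB re 1 uPa @ 1 m
powers = sorted((10 ** (sl / 10) for sl in source_levels), reverse=True)

loud_share = sum(powers[:15]) / sum(powers)   # share from loudest 15% of ships
print(f"top 15% of ships radiate {loud_share:.1%} of total power")
```

In this exaggerated toy the loudest 15% carry over 99% of the power; the paper's measured fleet has a broader source-level distribution, which is why its loud 15% carry about half.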
Grimoire: Using Git for Brain Management
Andrew Egbert
March 02, 2017
I briefly summarize the ideas behind Grimoire / Grok for the purposes of academic reference. Grimoire / Grok is a memory-state-saving application aimed both at expanding the working space a mind has available and at keeping track of a larger number of projects, ideas, and factoids than a mind is capable of by itself. This could be useful for student study purposes, for researchers, or (hopefully; further research is needed) for conditions such as Alzheimer's disease and dementia, which limit the brain's ability to keep track of and recall certain thoughts or memories. For example, one could couple the application with some sort of unobtrusive heads-up display, of which several types exist on the market (currently the software is rendered in a browser, so it should work on a HUD). Grimoire / Grok has two modes. The "Grimoire" mode is aimed at collecting, preserving, updating, and keeping track of a large number of segmented thoughts. Thoughts are organized by topic / item; for example, "calculus" / "stokes theorem" is one possible topic / item pair. Users may navigate to different topic / item notes through a central index, a search bar, or by following links from page to page. Thoughts are written in Markdown, HTML, LaTeX, or JavaScript, with Markdown as the primary mode. Thoughts are generally separated by Markdown headers, which serve a dual purpose in Grok mode. On disk, thoughts are stored as grimoire / topic / item / (files related to this thought). Typically, different Grimoires should correspond to contexts, such as 'work', 'home', 'hobby', 'school', or 'research'. Topics split a context into subareas; for instance, school might have topics such as 'geometry', 'language', 'history', and 'art'. Items then deal with specifics; for example, one might store under programming / C++ some items corresponding to strings, math, and so on.
Thoughts, as files in this folder structure, are tracked through git \cite{2009} so that any accidental changes can be reverted and a clear progression of thoughts can be maintained. The author imagines that stronger cryptographic guarantees could be given to memory and mental state through signed git commits, although the hash function used in git (SHA-1) has recently been shown by Google to be non-collision-resistant (citation needed), so some changes to the software would likely be necessary for use over a human lifespan. Further types of security guarantees are likely possible (and likely desired, if one is to rely on such software for the integrity of one's thoughts). While the Grimoire mode is aimed at context-specific long-term memory recall, Grok mode is aimed at improving short-term, working memory. Grok mode works by first selecting a subset of topics; the program then proceeds through each item in each topic. Each item is split up by Markdown headers: each header is asked as a question, while the body below the header is the answer. The user is expected to actively improve and prune each note while using Grok mode. The user then rates how well they know each item; if they know it, selecting 'good' increases the time before it is asked again, similar to Pimsleur's scheduled memory learning \cite{Pimsleur_1967}. In fact, the application uses a simple progression based on the Fibonacci sequence: items are quizzed again after 1, 1, 2, 3, 5, 8, ... days, assuming they are answered successfully each time. Other software applications, such as Anki, also implement spaced repetition (citation needed). The grimoire / topic / item structure is exactly the same as in Grimoire mode, so no additional work is needed to create quizzes. In this way, a user can quickly refresh a given topic shortly before it is needed.
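The Fibonacci review schedule described above can be sketched in a few lines of Python (an illustrative sketch; the function name and interface are hypothetical, not taken from the Grimoire codebase):

```python
# Hypothetical sketch of the Fibonacci spacing described above:
# an item answered 'good' k times in a row is next quizzed after
# the k-th Fibonacci number of days: 1, 1, 2, 3, 5, 8, ...

def days_until_next_quiz(successes):
    """Days to wait after `successes` consecutive correct answers."""
    a, b = 1, 1
    for _ in range(successes):
        a, b = b, a + b
    return a

# First review after 1 day; after five successful reviews, wait 8 days.
print([days_until_next_quiz(k) for k in range(6)])  # [1, 1, 2, 3, 5, 8]
```

A failed answer would presumably reset the success count to zero, restarting the schedule at one day.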
Can Trump succeed?
Ronaldo Baltar
March 12, 2017
Donald Trump promises to make America great again. For his followers, this means bringing companies back, creating jobs, ending "globalism", and making the country safer. What are the chances that these measures proposed by the President of the United States will succeed?
An L3-type silicon photonic crystal cavity with a quality factor exceeding 20 million
Momchil Minkov
Vincenzo Savona
and 1 more

February 22, 2017
ABSTRACT: We present an L3-type photonic crystal cavity in silicon with a theoretical quality factor of 20.9 million. This highly-optimized design is made by shifting the positions of the holes surrounding the cavity, and was obtained through an automated global optimization procedure.
Building a functional connectome of the Drosophila central complex
Romain Franconville
Celia Beron
and 2 more

January 08, 2018
The central complex is a highly conserved insect brain region composed of morphologically stereotyped neurons that arborize in distinctively shaped substructures. The region has been implicated in a wide range of behaviors, including navigation, motor control and sleep, and has been the subject of several modeling studies exploring its circuit computations. Most studies so far have relied on assumptions about connectivity between neurons in the region based on their overlap in light-level microscopic images. Here, we present an extensive functional connectome of Drosophila melanogaster's central complex at cell-type resolution. Using simultaneous optogenetic stimulation, GCaMP recordings and pharmacology, we tested the connectivity between over 70 presynaptic-to-postsynaptic cell-type pairs. The results reveal a range of inputs to the central complex, some of which have not been previously described, and suggest that the central complex has a limited number of output channels. Additionally, despite the high degree of recurrence in the circuit, network connectivity appears to be sparser than anticipated from light-level images. Finally, the connectivity matrix we obtained highlights the potentially critical role of a class of bottleneck interneurons of the protocerebral bridge known as the Δ7 neurons. All data is provided for interactive exploration in a website with the capacity to accommodate additional connectivity information as it becomes available. Raw data and code are made available as an OpenScienceFramework project.
Transforming ANOVA and Regression statistics for Meta-analysis
David LeBauer
March 07, 2020
INTRODUCTION
When conducting a meta-analysis that includes previously published data, differences between treatments reported with P-values, least significant differences (LSD), and other statistics provide no direct estimate of the variance.

ESTIMATING STANDARD ERROR FROM OTHER SUMMARY STATISTICS (P, LSD, MSD)
In the context of the statistical meta-analysis models that we use, overestimates of variance are acceptable: they effectively reduce the weight of a study in the overall analysis relative to an exact estimate, yet provide more information than either excluding the study or excluding any estimate of uncertainty (though there are limits to this assumption such as ...). Where available, direct estimates of variance are preferred, including standard error (SE), sample standard deviation (SD), or mean squared error (MSE). SE is usually presented in the format mean (±SE). MSE is usually presented in a table. When extracting SE or SD from a figure, measure from the mean to the upper or lower bound. This differs from confidence intervals and range statistics (described below), for which the entire range is recorded. If MSE, SD, or SE are not provided, it is possible that LSD, MSD, HSD, or CI will be. These are range statistics; the most frequently encountered are the 95% confidence interval (CI), Fisher's least significant difference (LSD), Tukey's honestly significant difference (HSD), and the minimum significant difference (MSD). Fundamentally, these methods calculate a range that indicates whether two means differ, and each uses a different approach to penalize multiple comparisons. The important point is that these are ranges and that we record the entire range. Another type of statistic is a "test statistic"; most frequently there will be an F-value, which can be useful but should not be recorded if MSE is available. Only if no other information is available should you record the P-value.
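As a concrete illustration of recovering SE from such range statistics, the sketch below applies two standard conversions (a large-sample 95% confidence interval, and Fisher's LSD assuming equal group sizes). The function names are illustrative, not part of any package discussed here:

```python
import math

def se_from_ci(lower, upper, z=1.96):
    """SE of a mean from a large-sample 95% CI:
    the full range spans 2 * z * SE, so SE = range / (2 * z)."""
    return (upper - lower) / (2 * z)

def se_from_lsd(lsd, t_crit):
    """Per-mean SE from Fisher's LSD with equal n per group:
    LSD = t * SE_diff = t * sqrt(2) * SE, so SE = LSD / (t * sqrt(2))."""
    return lsd / (t_crit * math.sqrt(2))

# A 95% CI of (8.04, 11.96) around a mean of 10 implies SE close to 1.0.
se = se_from_ci(8.04, 11.96)
```

Both conversions deliberately err on the conservative side when assumptions are violated, which is consistent with the point above that overestimated variance merely down-weights a study.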
Angle-resolved RABBIT: theory and numerics
Paul Hockett
March 24, 2017
ABSTRACT Angle-resolved (AR) RABBIT measurements offer a high information content measurement scheme, due to the presence of multiple, interfering, ionization channels combined with a phase-sensitive observable in the form of angle and time-resolved photoelectron interferograms. In order to explore the characteristics and potentials of AR-RABBIT, a perturbative 2-photon model is developed; based on this model, example AR-RABBIT results are computed for model and real systems, for a range of RABBIT schemes. These results indicate some of the phenomena to be expected in AR-RABBIT measurements, and suggest various applications of the technique in photoionization metrology.
Positive biodiversity-productivity relationship predominant in global forests

Jingjing Liang et al.

February 01, 2017
Jingjing  Liang1*, Thomas W. Crowther2, Nicolas Picard3, Susan Wiser4, Mo Zhou1, Giorgio Alberti5, Ernst-Detlef Schulze6, A. David McGuire7, Fabio Bozzato8, Hans Pretzsch9, Sergio de-Miguel10,11, Alain Paquette12, Bruno Hérault13, Michael Scherer-Lorenzen14, Christopher B. Barrett15, Henry B. Glick16, Geerten M. Hengeveld17,17.5, Gert-Jan Nabuurs17,17.6, Sebastian Pfautsch18, Helder Viana19,20, Alexander C. Vibrans21, Christian Ammer22, Peter Schall22, David Verbyla23, Nadja Tchebakova24, Markus Fischer25,26, James V. Watson1, Han Y.H. Chen27, Xiangdong  Lei28, Mart-Jan Schelhaas17, Huicui Lu29, Damiano Gianelle30,31, Elena I. Parfenova24, Christian Salas32, Eungul Lee33, Boknam Lee34, Hyun Seok Kim34,35,36,37, Helge Bruelheide38,39, David A. Coomes40, Daniel Piotto41, Terry Sunderland42,43, Bernhard Schmid44, Sylvie Gourlet-Fleury45, Bonaventure Sonké46, Rebecca Tavani47, Jun Zhu48,49, Susanne Brandl9,49.5, Jordi Vayreda50,51, Fumiaki Kitahara52, Eric B. Searle27, Victor J. Neldner53, Michael R. Ngugi53, Christopher Baraloto54, Lorenzo Frizzera30, Radomir Bałazy55, Jacek Oleksyn56, Tomasz Zawiła-Niedźwiecki57, Olivier Bouriaud58,58.5, Filippo Bussotti59, Leena Finér60, Bogdan Jaroszewicz61, Tommaso Jucker40, Fernando Valladares62, Andrzej M. Jagodzinski56,63, Pablo L. Peri64,65,66, Christelle Gonmadje46,67,William Marthy68, Timothy O'Brien68, Emanuel H. Martin69, Andrew R. Marshall70,70.5, Francesco Rovero71, Robert  Bitariho72, Pascal A. Niklaus73,74, Patricia Alvarez-Loayza75, Nurdin Chamuya76, Renato Valencia77, Frédéric Mortier78, Verginia Wortel79, Nestor L. Engone-Obiang80, Leandro V. Ferreira81, David E. Odeke82, Rodolfo M. Vasquez83, Simon L. Lewis84,85, Peter B. Reich18,86
Without Data, Are We Just Telling Nice Stories?
Josh Nicholson
February 20, 2017
At the foundation of research is data. The papers we write and the figures we make revolve around it, and it is what we spend countless hours collecting. And yet, most raw data remains absent from major studies \cite{Alsheikh_Ali_2011}. This problem has received much attention over the past few weeks, with preliminary findings being released from the Cancer Reproducibility Project, a large multi-year effort to see how robust top cancer studies are \cite{2017}. Like previous studies in psychology \cite{2015} and cancer \cite{Begley_2012}, the reproducibility project's finding that a large percentage of results are irreproducible, or at least very difficult to reproduce, raises serious questions and doubts about how we conduct and communicate our research. Authorea was founded to reinvent the research article so that it is data-rich, interactive, transparent, and replicable. Not only did we want to make Authorea a place where researchers could collaborate more easily and communicate their results more quickly, we also wanted to make sure that the data behind a study could be easily shared. This is why each article on Authorea is a repository in itself that allows you to host data directly within your article. We enabled integrations with Jupyter notebooks and various data visualization tools not just to make documents more aesthetically pleasing, but to make it easier to analyze each other's work. A quote in The Atlantic summarized one problem we're working to fix quite well: "If people had deposited raw data and full protocols at the time of publication, we wouldn't have to go back to the original authors," says Iorns. That would make it much easier for scientists to truly check each other's work. - The Atlantic. We believe that static snapshots of research living in PDFs behind paywalls are inimical to the advancement of research, and the findings from the various efforts looking at reproducibility support this.
Authorea is first and foremost a modern collaborative editor: we want to make it easy to write your work and utilize the power of the web. But we're much more than this. With preprint capabilities (DOIs coming soon), direct submissions to journals, and data hosting, we are working to make research communication more robust on numerous levels. Why should the most important documents in the world be shared and disseminated so poorly? They don't have to be, and in fact we're seeing encouraging signs that the next generation of researchers will do it differently. The following are just a few student papers on Authorea, all utilizing open data sets and analyses: "Analysis of ground-level ozone formation and its correlation with concentration of other pollutants and weather elements" and "Vision Zero Crash Data Analysis". We hope you'll join us and write your next paper with us. How we make research more robust as a community starts with us as individuals.
Meet DAD, the dynamic assessment dashboard
Raphaël Grolimund
February 23, 2017
Abstract: This paper doesn't present the findings of an experiment. It presents a tool, created and still under active development, that puts a specific teaching method, dynamic assessment, into practice through an online dashboard. The paper explores whether students felt comfortable with this teaching method, whether it helped them take control of their learning, and how they felt about the dashboard. Even though the tasks were both individual and group tasks, only the individual activities are analyzed. The tool has been in use for two years, but the data presented in this paper are only those collected last year (Fall 2016). Data were anonymised, cleaned, and published on Zenodo (10.5281/zenodo.290129).

Introduction
In higher education, students are most of the time evaluated by mid-term and/or final exams. This means that a student's understanding and learning is evaluated on a predefined day and that (s)he has to succeed that day. Failing is not allowed, even though failure can help students learn. The idea was to allow students to fail, thanks to dynamic assessment \cite{sharples_innovating_2014}. Instead of giving students only one shot, it allows them to fail, learn, and improve until they succeed. The Dynamic Assessment Dashboard (DAD) was created to assess students dynamically throughout the semester. Giving them control over their learning (pace, tasks) leads to self-regulation \cite{hattie_visible_2012} and was expected to increase students' motivation. Getting a bonus for completing a set of tasks adds gamification features that support students' engagement \cite{Hamari_2016}. DAD also includes gamification mechanics such as bonuses \cite{Deterding_2011,muletier_gamification:_2014}. DAD was created as a personal dashboard: a student can't access another student's dashboard and achievements. DAD is intended to increase students' self-efficacy \cite{Zimmerman_2000} whatever their learning style.
The mix of individual and group activities should help students reach the zone of proximal development as defined by Vygotsky \cite{vygotsky_interaction_1978}.

DAD
The idea of DAD was born from combining a reading of the Open University's Innovating Pedagogy 2014 report \cite{sharples_innovating_2014} with observations of how young children's learning is assessed. The former presents the concept of dynamic assessment, which gives the learner personalized assessment; the latter is based on simple stamps indicating when a task has been successfully achieved. DAD is an attempt to bring the two together in an online dashboard that displays activities defined by the teacher. All tasks are meant to help students reach the course's objectives. Students choose what to do and when to do it. If the teacher allows it, they can even choose whether to do it at all.
The five monkeys and critical thinking
Ronaldo Baltar
Claudia Siqueira Baltar
and 1 more

April 01, 2022
From time to time, the motivational story of the "five monkeys" circulates on the Internet, and it always draws many positive comments. This year was no different: several posts, on different networks, recalled the story, which encourages people to think differently from common sense. A kind of invitation to critical thinking. Briefly, for those who have never received a post or e-mail with this narrative: the story begins with the account of a scientific experiment. A group of researchers hung a bunch of bananas from the ceiling of a cage, with a ladder underneath. There were five monkeys in the cage. When one of the monkeys, after observing the situation for a while, climbed the ladder to get the bananas, all of them were sprayed with cold water. Some time later, another monkey tries to climb the ladder, and again all of them are hit by the jet of water. Soon, whenever one of the monkeys shows any intention of climbing the ladder, the others stop it. The experiment continues: one of the monkeys is replaced, and there is no more water jet. When the newcomer tries to climb the ladder to get the bananas, the four who witnessed the earlier situation stop it. The newcomer tries again and is again prevented. The monkeys are replaced one by one, and the scene repeats itself. By the end of the experiment, even without having witnessed the unpleasant initial situation, the monkeys no longer try to climb the ladder to get the bananas. With this illustration, the text aims to encourage people to be critical, proactive, and innovative. The message is: by continuing to do things the way everyone does, you may be missing opportunities you will only discover if you take risks. Since it was launched, the story has gone viral.
It first appeared in 2011, on the blog of the writer Michael Michalko, author of several good motivational texts on creativity in business, among them "Creative Thinkering: Putting Your Imagination to Work". The author invites readers to take a critical look at themselves: aren't you like a monkey in the experiment, the one who reproduces the same way of doing things without knowing why? Have you ever felt reprimanded by the group when you tried to do something different? Probably the vast majority of readers will answer yes to these questions embedded in the text. Perhaps that explains the story's success. Ever since I first received this message (and I have received it countless times!), I have been struck by how widely and positively this narrative is accepted. It seems to show that many people do not want to appear complacent and seek to think critically about common sense. A proactive and innovative attitude does indeed require critical and creative thinking, and critical thinking means revisiting pre-established concepts. But innovation is built on the accumulation of knowledge, not on the denial of acquired experience, as the five monkeys story indirectly suggests. Moreover, innovation depends on the capacity of institutions to create an innovative environment. The five monkeys story emphasizes that those who stifle initiative are one's peers, one's co-workers. In truth, it is institutions, not individuals, that create an environment that either favors or inhibits criticism and diversity of ideas. It is well known that the starting point of critical thinking lies in problematizing reality through information and knowledge about it. The next step is to separate, organize, classify, and rank the known facts. Based on analysis and method, more suitable alternatives to the initial problem are then proposed. That is where true innovation comes from. Creativity that pulls solutions out of nothing is magic.
Creativity that formulates solutions from the analysis of accumulated experience, however, does generate knowledge and has an innovative impact. The five monkeys narrative leads readers to believe they are looking at a genuine scientific experiment. The curious thing is that the adjective "scientific" should mean exactly the opposite: scientific knowledge is knowledge obtained by a demonstrable method that can be questioned. Yet here it is wrongly treated as a statement of unquestionable truth. The "scientific experiment" that gave rise to the five monkeys story never existed. It is a fictional narrative created by Michalko, presumably inspired by the (genuinely real) experiment of Prof. Gordon Stephenson, of the Department of Zoology at the University of Wisconsin, published in 1966 in the article "Cultural Acquisition of a Specific Learned Response among Rhesus Monkeys". In Prof. Stephenson's article, pairs of rhesus monkeys are used to test whether knowledge is transmitted within the species. There are no five monkeys, no banana hanging from the ceiling, no jet of cold water. The Wisconsin zoology professor's research question was much more specific: is acquired behavior transmitted between animals? In the real experiment, each pair consisted of one animal conditioned to avoid a food item (with jets of air, not water) and one unconditioned animal. Stephenson wanted to know whether the conditioned animal (which he called the "demonstrator") would transmit its "knowledge" to the unconditioned one (which he called "naive"). Anyone who reads Prof. Gordon Stephenson's article will see that the study's conclusion is quite different from that of the five monkeys text. In the real research, in some pairs the naive monkey copied the behavior of the conditioned monkey (as reproduced in the five monkeys story). In other pairs it did not.
There were even pairs in which the opposite happened: the "naive" animal ended up influencing the demonstrator monkey, which overrode its initial conditioning and ate the food (the opposite of the five monkeys story). The five monkeys narrative bears no relation whatsoever to reality. In the fiction of the five monkeys, the newcomer is prevented by the others from approaching the ladder, because the veterans know from experience what they fear: the jet of cold water. The newcomer ignores the warning and is reprimanded by the others. The newcomer resigns itself. Readers identify with the newcomer monkey and lament every time they took an initiative and were held back. Faced with an analogous situation, readers are encouraged not to go along with the veterans and to go after their banana. The intention here is not to criticize Michalko's text, much less Prof. Gordon Stephenson's academic work. The goal is to analyze the way the five monkeys message is interpreted, pointing exclusively to individual self-improvement as the path to innovation. To that end, let us look at this imaginary experiment from another angle. Note that only the observers (the researchers in the fictional story) know that there is no more water jet. The monkeys do not know whether or not there will be a jet of cold water, and they are confined in a cage with no way out. So, for the five monkeys, the cold water remains a concrete possibility. The risk is real. Those who impose the risk and encourage non-creative behavior are the people running the experiment, in this case the imaginary scientists. The narrative humanizes the possible behaviors of the five monkeys. Following the same line, suppose the newcomer, upon entering the cage, were not informed by the others about the danger of approaching the bananas. The veterans knew, but said nothing. Would that be a rational attitude for the group? Certainly not.
The fictional monkeys, by sharing their experience, minimized the existing risk, represented by the experiment's managers, who kept the hose at the ready. In other words, for them, sharing the information was a way of minimizing the risk to which everyone was exposed. By the end of the experiment, none of the five monkeys had actually seen the water jet. Even so, why would they have reason to distrust the information passed on to them? The first monkeys really did receive the unpleasant blast of cold water. That was the only concrete information available. If you were there, would you risk climbing the ladder, dismissing your companions' experience? If a newcomer monkey tried to climb the ladder, what would you do? Would you encourage it, or try to dissuade it, knowing that the consequences of its act would fall on everyone? The story is widely used to motivate people, with the focus falling exclusively on individual behavior. In reality, however, the bigger problem lies with the institutions themselves: the cage, and the water jet controlled by the outside observers. Betting that an individual can go against the common sense of a company or institution unprepared for change is naive. Perhaps the monkeys in the story were not so resigned after all. They simply did not trust that the managers outside the cage (the institutions) would not throw cold water on them. Would you? The five monkeys story is just a pseudoscientific version of the old adages "a scalded cat fears cold water" and "an old monkey doesn't stick its hand in a jar". Dismissing accumulated experience is a serious mistake. Critical thinking does not simply mean looking the other way and doing what others do not. Someone who points out flaws may be a critic in the everyday sense of the word, but that does not necessarily make them a person whose critical thinking is oriented toward innovation, not least because not everyone who correctly identifies a problem has a correct solution to it.
That is why innovating requires collective critical thinking, whose starting point lies in the concrete experience accumulated by the group. The challenge is to move beyond conformism without falling into voluntarist naivety. The solution is to use experience and knowledge of the situation, including mistakes and successes, as the base from which to look for new solutions, well beyond the reach of common sense. That is innovative critical thinking. But personal motivation alone is not enough. There must be freedom for thought and for criticism, and that only happens in institutional environments that guarantee the conditions for free reflection on shared experience, without hoses, real or imaginary, pointed at everyone. Individual motivation is the foundation, but innovation is not the product of isolated individual action; it results from a favorable shared environment. It is therefore the role of institutions to create the collective conditions for innovation and creativity, starting by sharing experiences, building trust, and guaranteeing that new thinking and diversity of ideas and opinions will not be met with cold water.
Avoiding plagiarism: the road to autonomy
Raphaël Grolimund
Noémi Cobolet
and 1 more

February 23, 2017
Introduction
Information literacy \cite{zurkowski_information_1974} has for decades been the playground of teaching librarians. But as understanding of this concept is limited outside libraries, frameworks all around the world \cite{american_library_association_information_2000,bundy_australian_2004,adbu_referentiel_2012,deutscher_bibliotheksverband_standards_2009}, including Switzerland \cite{informationskompetenz_referentiel_2011}, have been created to advertise the meaning and importance of information literacy. Beyond its meaning, the first challenge to face when teaching transferable skills is that everyone feels competent. After all, everyone uses transferable skills every day. The problem is that these skills are used, but not mastered. The first goal is therefore to turn students from unconscious incompetents (they don't know that they don't know) into conscious incompetents (they know that they don't know) \cite{allan_no-nonsense_2013}. Once they realise that they don't know, they feel the need to learn something new to fix this.
DOI Test
Jan Krause
February 23, 2017
...
Nelder-Mead method (\(n\) dimensions) - Downhill Simplex
Daniel Simões
July 20, 2020
Let f(x₁, x₂, …, xₙ) be the function to be minimized. To evaluate a point means to compute the value of the function at that point.

1 - Define n + 1 initial points of n dimensions each: x_i = (x_{i1}, x_{i2}, …, x_{in}), with 1 ≤ i ≤ n + 1. Sort and relabel the points so that f(x₁) < f(x₂) < … < f(x_{n+1}).
2 - Compute the centroid x_g = (x_{g1}, x_{g2}, …, x_{gn}) of the n points with the lowest evaluations: x_{gj} ← (1/n) Σᵢ₌₁ⁿ x_{ij}, 1 ≤ j ≤ n.
3 - Compute the reflection point x_r = (x_{r1}, x_{r2}, …, x_{rn}): x_{rj} ← x_{gj} + α(x_{gj} − x_{(n+1)j}), 1 ≤ j ≤ n. Evaluate this point: f(x_r).
4 - If f(x₁) < f(x_r) ≤ f(xₙ), set x_{(n+1)j} ← x_{rj}, 1 ≤ j ≤ n. Sort the points in increasing order of evaluation and go to step 2.
5 - If f(x_r) ≤ f(x₁), compute the expansion point x_e = (x_{e1}, x_{e2}, …, x_{en}): x_{ej} ← x_{rj} + β(x_{rj} − x_{gj}), 1 ≤ j ≤ n. Evaluate this point: f(x_e).
6 - If f(x_e) ≤ f(x_r), set x_{1j} ← x_{ej} and x_{ij} ← x_{(i−1)j}, 1 ≤ j ≤ n, 2 ≤ i ≤ n + 1, and go to step 2. Otherwise, set x_{1j} ← x_{rj} and shift the remaining points likewise, and go to step 2.
7 - If f(x_r) > f(xₙ), compute the contraction point x_c = (x_{c1}, x_{c2}, …, x_{cn}): x_{cj} ← x_{gj} + γ(x_{(n+1)j} − x_{gj}), 1 ≤ j ≤ n. Evaluate this point: f(x_c).
8 - If f(x_c) ≤ f(x_{n+1}), set x_{(n+1)j} ← x_{cj}, 1 ≤ j ≤ n. Sort the points in increasing order of evaluation and go to step 2.
9 - If f(x_c) > f(x_{n+1}), contract along all dimensions toward the point x₁: x_{ij} ← x_{1j} + ν(x_{ij} − x_{1j}), 2 ≤ i ≤ n + 1, 1 ≤ j ≤ n. Sort the points in increasing order of evaluation and go to step 2.

Recommended values: α = 1, β = 1, γ = 0.5, ν = 0.5.