
Summer 2022


Welcome To Our Summer 2022 Academy Quarterly Review
As the Zoom screenshot of the incoming Board attests, this is a moment of transition for the Academy. The return to in-person workshops in our new quarters at Tartu is the most anticipated change, but for me, personally, the more daunting shift is taking over the editorship of the AQR from Linda Tu. In the past month, the famous nineteenth-century Punch cartoon ‘Dropping the Pilot’ has often crossed my mind. It depicted the recently installed Kaiser Wilhelm smugly witnessing the departure of Otto von Bismarck, the far more adroit Chancellor he had just dismissed. Linda, of course, would be welcome to stay on if she so desired; she is Bismarckian only in the sense of having a sustained record of accomplishments. The good news is that she has promised to remain an active AQR contributor.
If there is a common theme to this issue, perhaps it is the way patterns shape our perceptions of the world. Bob Fabian’s contribution to the “Human Brain: Refreshed?” workshop, described in Don Plumb’s report, delineates the fundamental process of how billions of neurons in the brain interact to identify and interpret recurring patterns. The “Gender’s Voice” workshop adopted the consistent rhythms of rap poetry to identify and critique some of society’s most entrenched assumptions about aging. In a second contribution to this issue, Don Plumb offers some very valuable advice on the effective use of PowerPoint for workshop presentations, implicitly commenting on the ways in which the brain processes information. Ron Miller’s fascinating discussion of Automatic Voice Recognition technology points to the practical applications of pattern recognition that underlie the whole field of Artificial Intelligence. Matt Segal’s account of his excursion into the heart of Nova Scotia lobster war territory reminds us of the anxiety produced when traditional patterns of human activity are disrupted, but at the same time his evocative photographs affirm the comforts of summer in the endless cycle of the seasons. Tanya Long, in her second contribution, brings us back to basic neural processes in a passionate review of Richard Powers’s novel, Bewilderment, which questions whether consistent forms of life actually do inhabit the cosmos.
As always, we would be delighted to get feedback on the contents of the AQR and to see it function as a springboard to wider conversations within the Academy. To that end, we ask you to consider Tanya’s assertion that Richard Powers is the most significant English language novelist writing today. Who would you nominate as the most significant contemporary novelist – significant in the larger scheme of things or significant for you personally – writing in English or any language?
Speaking of wider conversations, we have an additional request, made on behalf of the Marketing Committee, tasked with devising strategies to expand the Academy’s membership. Word of mouth remains one of the most effective advertising techniques. With Covid restrictions loosened and people beginning to rebuild face-to-face networks, now is the perfect time to recommend this amazing organization to friends and acquaintances. Sustaining the Academy in this way is an essential undertaking for all of us.
Keith Walden
What's On At The Academy
Workshop Presentation Focus: The Stuff of Thought
A winter session of “The Human Brain: Refreshed?” workshop featured a presentation by Bob Fabian on “The Stuff of Thought.” Bob had been interested in cognitive science for a long time, and presenting gave him an incentive to deepen his understanding. His research included library books and encyclopedias as well as the Internet. He presented his information in a PowerPoint, complete with a list of suggested reference materials, including what he described as a small library of relevant texts he found on book4you.org.
Bob started by asking the question “What is the stuff of thought?” His simplified answer was patterns: their recognition and formation. Our brain recognizes these patterns based on past experience. The concept of a chair, built up from the attributes of the many chairs we have observed, allows us to recognize a new one. Processing of the chair pattern is automatic. Similarly, melody patterns are recognized automatically. Bob used an amusing YouTube clip of the pianist Nicole Pesce to demonstrate how we can recognize not only a tune (Happy Birthday) but also a composer’s style (Bach or Chopin) with no conscious thought required. Mental patterns also allow us to recognize types of people: friend, enemy or lover. The familiar nature-nurture debate asks whether these basic pattern types, including those shaped by our social culture, are ones we are born with or ones we develop as we grow.
Patterns are a consequence of the neurological structure of a brain that does not stand alone, but rather is actively engaged with its body and its environment. The human brain contains some 100 billion neurons, and their chemical and electrical interactions correlate with the patterns that we call thought. Each kind of neuron has its own place in the brain and can connect to as many as 10,000 other neurons. Moreover, a neurological interpretation of thought has numerous medical implications. When the so-called “zipper” molecules that hold synapses together malfunction, the synapses become dysfunctional, which may be linked to brain disorders such as epilepsy, autism and schizophrenia.
A wide-ranging discussion dealt with questions about whether the stuff of thought might be prediction or memory, the relationship between neurons and dementia, and music appreciation in different cultures. Bob’s central idea was the role of adaptable, flexible patterns of thought and their relationship to neurons: everything that happens in our minds happens because particular neurons become the focus of activity. As Bob had hoped in his goals for the presentation, his session did indeed “stimulate thought about thought.”
Don Plumb
Workshop Report: Gender's Voice
One of the Academy workshops I participated in this year was Gender’s Voice, an exploration of gender issues through video recordings, fiction, film and other forms of art. We listened to powerful speeches by Michelle Obama, Chimamanda Ngozi Adichie, Alexandria Ocasio-Cortez, Leymah Gbowee and others. We read novels, discussed artworks by women and watched contemporary movies such as Carol and The Favourite.
One of the sessions was on slam poetry, a form of spoken word, a bit like rap but without the music and crudeness. None of us was very familiar with slam poetry and no one volunteered to take it on. Encouraged by our facilitators Trudy Akler and Donna Reid, we watched 14 slam poems by young women “spittin’ some fierce feminism” on subjects ranging from appearance to rape culture, abortion, periods and breasts. The one exception, a man, spoke about being a man and wanting to “do it right.” The poets were all young and driven by considerable anger.
Which got us to thinking – what would a slam poem written by our group of not-so-young participants look like? Here is the result, collectively created by Trudy Akler, Donna Reid, Terry Murray, Valerie Melman, Stephen Johnson, Marilyn Tate, Rhona Wolpert, Sally McLean, Nancy Kraft, Anne Fourt, Maureen Fitzgerald and Tanya Long.
Tanya Long
RAGING FOR AGING
I’m engaging with aging
Raging at the slogans of ageists that say:
“Youth is Beauty.”
“Old is Glorious and Powerful.”
Advertising debases public spaces.
Don’t tell us we look young:
“Do you think she’s had work done?”
Perseverance is key
We have fought to be who we were meant to be
You need to think more critically
like us ‘old folks’.
Be silent. You can’t
Listen. You won’t
Learn. You must,
Your fight awaits….
Dying with or dying of
Co-morbidities
Language denying my worth
Ignoring my contributions.
Saying my prime has passed
Pushing me out of tasks
… the race
But I gave my soul, my mind
And time to this place.
We dance.
Of course we dance, we’ve always danced.
I still snowshoe at 85.
If the world is here in 10 years
I may bend like a snow-laden yew tree
But I will not break.
Getting older, getting bolder,
yet feeling the cold in my thinning skin;
still looking over my shoulder
as I begin to be prey
for the growing frauds of today.
Keeping my eyes wide and wise
for each day’s beauty and surprise
For now, I’m staying right here on the pages
I won’t be written out of life.
Still engaged
Sometimes enraged
Always assuaged
By the next chapter.
Opinions
The Seven Deadly Sins of PowerPoint (and Their Solutions)
PowerPoint is presentation software that is widely used in industry and education, but often it is used less effectively than it could be. The author worked in distance education at TVO, where consultants were engaged to train writers and presenters on the best ways to communicate information on screen. These lessons also apply to the presentations that we see every week at the ALLTO. The following article offers a simple set of rules for making your PowerPoints more powerful and effective.
A theologian has stated that the seven deadly sins are so called because they lead to the “death of the soul.” These sins, such as pride, envy and sloth, can be overcome with corresponding virtues like humility and diligence.
In a poorly planned PowerPoint, the audience survives the experience but the presenter wastes an opportunity to communicate ideas effectively. But there is hope. The “seven deadly sins of PowerPoint” can be identified and, more importantly, can be overcome with some simple rules.
Deadly Sin # 1: Sentences
Information is presented in full sentences.
Why a Problem? Full sentences take up valuable space and encourage Deadly Sin #2.
The Solution: Use point form, not sentences.
Deadly Sin # 2: Reading Verbatim
The presenter reads exactly what is already on the slide.
Why a Problem? Our reading speed does not match our listening speed. Reading confuses rather than reinforces your message. The audience can become bored and stop listening.
The Solution: Do not read straight from the presentation.
Talk to and elaborate on your key points (already in point form on your slide).
Deadly Sin # 3: Overpacking
Too much information is packed on a slide, with too many lines and too many words.
Why a Problem? There is too much content for one slide. Sometimes the main ideas are lost, because there is just too much there, often producing Deadly Sin #4.
The Solution: Use the 8 x 8 Rule for each slide:
8 or fewer lines per slide, and 8 or fewer words per line.
Deadly Sin # 4: Illegibility
The font is too small, too ornate or too light, or is cluttered with bold and italicized text.
Why a Problem? The information is too difficult to read. If you crowd in too much text, the audience can’t or won’t read it.
The Solution: Use just one simple font (e.g., Arial, not Olde English.)
Use minimum 28-point font size in the body, and minimum 36-point for headings.
Deadly Sin # 5: Shouting
Titles and key points are written in ALL CAPITALS.
Why a Problem? Research has shown that use of all uppercase or all lowercase letters in words makes information harder to process. A combination is optimal.
The Solution: Use upper/lowercase combinations for titles (e.g., Solution).
Use regular sentence case for your point form information.
Deadly Sin # 6: Distraction
Too many different colours or fonts or backgrounds are used. The fonts chosen do not contrast enough with the background and are hard to read.
Why a Problem? The key content can be lost if the backgrounds, colours, and fonts are too complicated, or are difficult to distinguish from each other. The audience is distracted from the message.
The Solution: Use one simple slide template.
Use readable contrasting font colours (2 or 3 maximum).
Deadly Sin # 7: Boredom
There are no visuals or photos for interest.
Why a Problem? The presentation is less interesting and memorable than it could be if images and photos are used when appropriate.
The Solution: Use relevant visuals for interest (but don’t overload your slides with them).
Don Plumb
Technology in Our Lives

Automatic Voice Recognition
“By 2019 over 100 million Alexas had been purchased; the Google Assistant app is available on over 1 billion devices.”
Introduction
After a lively discussion in Sandra Linton’s Special Interest Group on the topic of Voice Artificial Intelligence (AI) this April, I was inspired to give some thought to my own recent experience. I had been using a smart cable TV remote that, coincidentally, had the same technology in Florida as in Toronto; I was commanding Alexa to do various “media control” tasks and using “hands free” for my cell phone in the car (it even had “text to voice” for text messages). After years of being told to push a button on my phone by some exasperating IVR (Interactive Voice Response) system, was I getting revenge by ordering machines around? I am sure some of you have had similar experiences. So how advanced is automatic voice recognition (AVR) today? Are we slowly being surrounded by an Internet of Things and a pervasive electronic “envelope”?
Good questions all. Where previously I had written about “touch,” this time it’s about “voice.” We are headed for a future where we will have a lot more voice interactions with machine-based systems in health care, home automation and entertainment. Whereas we have been interacting with our computers and tablets mainly using a “graphical user interface,” we interact with this new technology using a “voice user interface.”
The following should not be taken as a recommendation for any specific product; it is intended to explore and describe some personal early experiences with the technology. I will have a more in-depth discussion of the privacy issues and implications for the future in Part 2, in next year’s AQR.
History
Demonstrated at the 1962 World’s Fair, IBM’s “Shoebox” device was able to recognize 16 words. Some 35 years later, in 1997, call centres started using Interactive Voice Response. This was likely our first not-so-satisfactory experience of talking to a machine. A very significant step was taken by Geoffrey Hinton at U of T, who applied deep neural network (DNN) AI to the problem of image recognition; in 2012, with two of his students, he won an international competition in which visual recognition software rivalled human accuracy for the first time. Geoffrey Hinton has been called the godfather of deep learning and has worked part-time with Google since 2013, leading its AI efforts.
Here are some key events in the industry timeline of speech recognition.
2004: Microsoft adds “speech” to Microsoft Office
2006: Google launches Google Voice search application
2012: Amazon granted the first patent on the Alexa voice assistant
2015: Google shows a 49% performance improvement using a CTC-trained LSTM (a neural network approach to speech recognition)
2017: Sonos introduces the first Wi-Fi “smart audio speaker” that can connect to the Internet. Note: this is a highly significant development that has revolutionized the consumer music industry.
2018: Amazon introduces Alexa and the Amazon Echo Dot products
2018: Google introduces the Google “smart” home assistant and the Google Pixel phone
2019: 100 million Alexas purchased; Google Assistant is available on 1 billion devices
2019: Facebook launches a stand-alone video calling and messaging product with a 10-inch display and Alexa functionality.
First, the Virtual Digital Assistants. Some of us may already have experience with Alexa, Siri or the Google Assistant. These are called Virtual Digital Assistants (VDAs) and are a type of software that uses natural language processing and certain features of artificial intelligence to communicate with users. They can carry out tasks such as controlling smart speakers, home automation devices and “Ring” video doorbells, and they are integrated into smartphones. This is becoming an important feature of consumer electronics.
What languages can Alexa, Siri and the Google Assistant “understand”?
You will encounter Siri in Apple iPhones and iPads. Currently it can work with 18 languages. Alexa is the Amazon VDA and understands commands in 8 languages. The Google Assistant understands 44 languages on smartphones and 13 on the Google Home for smart home automation devices, e.g., lighting, thermostats, switches and doorbell cameras. There is ongoing competition for market share between Google and Amazon (Alexa) for control of the rapidly expanding range of home automation products.
Privacy concerns
The local device listens for a “wake” word and then contacts “cloud” services. It is one thing to listen and quite another to store records. The local device, be it an Amazon Echo Dot or a Google Home Mini/Google Nest Mini, only has enough computing power to recognize the wake word, such as “Alexa” or “Hey Google,” and it typically keeps only about 3 seconds of audio in a continuously updating memory buffer. (These devices do not listen all the time to what is going on in your home.) Once the wake word is recognized, however, your commands are sent over the Internet to the cloud server, where they are decoded and, hopefully, understood by the AI-trained software. The system will generate an audio response to you in your preferred language. These messages are saved. Google and Amazon have partially addressed some of these privacy concerns by making it possible for a user to delete all messages. For example, I can say “Alexa, erase all my messages for this week.” It is also possible to view all your messages that have been saved.
Nothing is foolproof or hacker-immune, so if you have these devices, note that most come with a switch to turn off the microphones and cameras. Certainly do this at night.
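For readers curious to see how the “wake word” idea might look in code, here is a small, purely illustrative Python sketch of the flow described above. The wake word, the buffer sizes and the function names are invented for the example; real devices run trained neural models on audio rather than text, so this shows only the general control flow, not any vendor’s actual implementation.

```python
# Toy illustration of the wake-word flow described above (not any real device's code).
# Audio is simulated as text "frames"; real devices run trained neural models on sound.
from collections import deque

WAKE_WORD = "alexa"       # hypothetical wake word
BUFFER_FRAMES = 3         # the device keeps only a few seconds of audio locally
COMMAND_FRAMES = 5        # how many frames are forwarded after the device "wakes"

local_buffer = deque(maxlen=BUFFER_FRAMES)   # short, continuously overwritten memory
frames_left_to_send = 0

def send_to_cloud(frame):
    # In a real system this is the point where audio leaves the device
    # to be decoded by the AI-trained software on the cloud server.
    print("sent to cloud:", frame)

def on_new_frame(frame):
    global frames_left_to_send
    word = frame.lower()
    if frames_left_to_send > 0:
        send_to_cloud(word)                   # a command is in progress: forward it
        frames_left_to_send -= 1
    elif word == WAKE_WORD:
        frames_left_to_send = COMMAND_FRAMES  # wake word heard: start forwarding
    else:
        local_buffer.append(word)             # everything else stays (briefly) on the device

# Simulated overheard speech, one frame at a time; only the words after
# "Alexa" ever leave the device in this sketch.
for f in ["nice", "day", "Alexa", "play", "some", "music"]:
    on_new_frame(f)
```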
Using Alexa
If you speak with a Scottish accent, Alexa may not understand you. Alexa’s speech recognition has been trained on a total of 8 languages, including common English dialects; Scottish is not one of them. Inflection, intonation, pace and, in particular, background noise make “wake” word recognition challenging. The word “Alexa” may occur in ordinary conversation or on TV; however, it is unlikely to sound like a command. If Alexa does seem to recognize your command, then you are on the way to discovering some interesting and useful applications. A small number of people will find their experience with Alexa frustrating and error-prone.
The part of Alexa that “learns” lives in the cloud, and this is what determines how successfully it executes commands. For example, if you ask Alexa for some information, it then searches the Internet and might occasionally ask you whether you are satisfied with the answer. Alexa will also learn by analysing rephrased requests: if Alexa’s initial response to a request is unsatisfying, the customer might cut the response off and rephrase the request. If the response to the rephrased request is allowed to play out, it is a strong signal that the first request should have elicited the same response. The part of Alexa that is in your local device does not learn; most likely it is you, the user, who is being trained to speak so that Alexa can understand you. It seems that the more built-in microphones your device has, the better, so that the direction of the command source can be localized. The Amazon Echo Dot products have several.
Alexa can play music, read one of your Kindle books, play podcasts, tell you the weather and the news, and answer questions by searching the Internet (it uses the Microsoft Bing search engine). It can command your smart home hub if you have one, set timers and reminders, and play games. Since it lives in the Amazon universe, it can access all your Amazon services such as Audible, Prime Video and your Amazon account. You can even order by voice. In my case, since I have Alexa on a Facebook Portal, Alexa can initiate WhatsApp video calls and start Zoom meetings.
Alexa functionality can be increased by adding “Skills.” These are like the apps you download onto your smartphone. There is no extra cost, and so far you can choose from 100,000 extra skills. A skill is basically a set of commands used to control some extra device, or something that Alexa can already access. For example, relaxation skills involve sounds, guided meditation and phrases. For education you can add math skills, trivia skills, language skills and more. If you had a device with a screen in your kitchen, Alexa could display a recipe, or if you need to learn how to do a small task, Alexa can find it on WikiHow and tell you. All this is “hands free.”
TV remote technology (The “Voice remote”)
Using a complicated TV remote control is a problem that needs a solution. We do not want to continue to struggle with a myriad of small buttons, sometimes in the dark. It is just watching TV, after all, not flying an airplane! With the large number of cable channels and programs on streaming services, searching can be frustrating.
There is, of course, the “Smart TV,” with built-in computing power and an Internet connection. But the TV alone has not solved the problem of selecting and searching content sources. Enter the remote control, with a microphone added to it.
In 2017, the National Academy of Television Arts & Sciences announced that Comcast had won an Emmy® Award for Technology and Engineering. The award specifically honours the work of the technology teams that developed the Xfinity X1 Voice Remote and the innovative software platform that powers it. This is what you will find in the Rogers Ignite TV remote.
The remote has a microphone; basically, this is a push-to-talk function. The remote digitizes the audio and sends it to the TV set-top box, which has the Internet connection. The TV shows this as text on the screen. The digitized message is sent to a “cloud” service for interpretation, which sends back a command to the set-top box. The set-top box may show you the results of a search and be ready for additional voice or button commands. Everything we used to get from “TV Guide,” and more, is now available.
The voice remote is a very successful product: using what is virtually a handheld microphone, with a text display of what you said, eliminates many of the noise problems that the Alexa devices have. The push-to-talk feature means that the remote is not always listening, hence no privacy concerns. You may have noticed that Rogers has been encouraging the use of the voice feature in recent TV advertisements.
Ron Miller
Reader, have you had experience with Alexa or a voice remote? We are interested. Please send your comments to ouracademy2021@gmail.com.
First Person

The Road to Saulnierville: (An August Memoir)
Originally this photo essay was going to be about The Nova Scotia Lobster Wars of 2020. In researching the essay, I discovered that The Narwhal had already covered Act I of this story in a beautiful piece of online investigative photojournalism – an easy and impressive read, and a good place to start.
For the last five summers I’ve been spending my Augusts in Prince Edward Island, enjoying the beautiful beaches of the North Shore and the pastoral landscape of the Island’s interior. I was looking forward to catching up with friends, with summer reading, editing a backlog of raw photos, playing beach bocce and eating fresh clams, oysters, mussels, scallops and (yes) lobster.
When my former student, good friend and award-winning documentary cinematographer, John Hopkins, picked me up in Charlottetown, he told me he wanted me to go with him to get and hitch up a Viking trailer he had just bought. It was parked on a farm outside of Truro, NS. But first he wanted to head down to Saulnierville, where he was going to film Act II of his documentary about the lobster wars. He wanted me to shoot B-roll (background footage). Here’s a précis of the backstory.
The courts ruled in 1999 (the Marshall Decision) that Mi’kmaq (pronounced, and often spelled, Mi’kmaw) treaty rights give Indigenous fishers the right to fish, pre-season and without a licence, to secure a “moderate livelihood,” but stipulated that those rights can be limited for the sake of conservation. To the Maritime Fishermen’s Union (MFU) and many of the non-Indigenous independents, this was too much of a handicap. The federal government was subsidizing Mi’kmaq boats. Lobster boats cost well over a million dollars. One of the bands had built its own processing plant, and there is a large black market for summer catch. But the MFU’s main concern is that summer fishing would harm the spawning. The Mi’kmaq say they mark the females and throw them back. But there are trust issues. In 2020, non-Indigenous fishers had cut and sunk Mi’kmaq traps, and there was a violent confrontation on the Saulnierville pier. On August 27, there was to be a procession of Mi’kmaq boats in a ceremony to exercise their treaty right to catch and sell fish to earn a moderate livelihood. The expectation was that there would be a repeat of the previous year’s confrontation and/or that the MFU fishers would form a blockade. Mi’kmaq Chief Gary Prosper and Assembly of First Nations (AFN) National Chief RoseAnne Archibald were present to offer support. This was going to be a big event. CBC Newsworld, Global, CTV, PBS, APTN and a CBS affiliate from Boston were all there to cover it. The only no-show was the MFU: no members, no boats, not even one union rep. Therefore, no confrontation. A non-event. It was a smart tactical move by the MFU. Chief Archibald took the opportunity to address the AFN live on APTN and turned it into five minutes on CBC’s The National, and I got to take a lot of photos.
I’ve never seen seawalls so high. Note the high-water lines: at high tide, the sea can crest as much as 10 to 12 metres above them. The shoreline here is so different from up in Weymouth, our home base away from home. There, low tide leaves a sea of quick mud that can swallow you up if you are wrong-footed and step too close. At Mavillette, where Fundy opens to the sea, low tide uncovers 200 metres of beach that bocce players, and clammers, would die for.
The road from Saulnierville to Mavillette is a road rarely travelled by Upper Canada urbanistas like me. I saw two Ontario plates – an RV and a 2007 Ford Explorer pulling a trailer like the one we were going to pick up near Truro. Most of the out-of-province plates were from New Brunswick, Quebec and Maine, and I could count them on my fingers. There were Acadian flags everywhere: on lawns, houses, bumper stickers and boats. The Nova Scotia flag came in second and the Maple Leaf, third. Antiques and quilt making also make a strong contribution to the local GDP.
The winding road to Newport, near Truro, to pick up the trailer was revealing and very different: a smattering of prosperous farms – most of them owned by fisher families – amongst a sea of dilapidated barns and fallow fields and almost deserted villages.
Back on the North Shore of PEI, the sky opened to a glorious sunset and the breeze from the Gulf of St. Lawrence cooled hot summer air. High tide and a full moon brought with them a school of sea bass. We caught our limit – six. It was the first fish I’d caught in 65 years. The last one was a catfish in the Welland River. I was in grade seven.
Matt Segal

Bewilderment by Richard Powers
When I started to read Bewilderment by Richard Powers, whom I consider to be the most significant English language novelist writing today, I did so with some trepidation. I thought his previous novel, The Overstory, was an epic masterpiece and I wondered whether he could meet his own standard. I need not have worried.
Bewilderment is very different from The Overstory – much shorter and more intimate in its focus on the relationship between father and son – but it does continue the theme of environmental destruction. Where The Overstory focussed on the clear-cutting of old-growth forests, Bewilderment takes on human, animal and plant life on Earth and beyond. The three main characters are Theo Byrne, an astrobiologist; his nine-year-old, neurodiverse son Robin; and his wife, Robin’s mother, Alyssa, who died in a car accident two years before the events of the novel. Theo and Robin struggle to come to terms with her death.
Theo’s work as an astrobiologist has him searching for life on other planets. Robin is a funny, loving boy who thinks and feels deeply, loves animals and spends hours creating intricate drawings of plant and animal life on this planet. He is also facing expulsion from school because he smashed a fellow student in the face. Robin has been variously diagnosed with OCD, with ADHD and as autistic. Psychoactive drugs are recommended to control his violent mood swings. Extremely uncomfortable with the thought of administering these drugs to his young son, Theo searches for an alternative and finds it in an experimental process called DecNEF, with impressive results. DecNEF is a neuro-feedback process that enables Robin to connect with the mind of his mother by listening to tapes of conversations taken with her before her death. Alyssa was optimistic, joyous and loving, and also committed to saving life on this planet. She was a lawyer who specialised in animal rights. The transformation in Robin enables him to experience life with equanimity and joy. He looks at the world with reverence and empathy, and in his transformation is contained one of the themes of this book – that if we have any hope of saving our planet, this is the transformation that we all need to undergo. Bewilderment is a powerful indictment of human greed, blindness and arrogance.
Theo’s search for “life” on other planets has so far proven to be futile. Yet in a universe comprising trillions of stars and billions of galaxies, the possibility that we are the only inhabited planet is extremely unlikely. The novel raises the audacious idea that we are not finding “life” because we do not know what we are looking for. We assume that life has to bear some resemblance to life as we know it and therefore can only exist on a planet whose conditions approximate ours. In an effort to bond with his son and appeal to Robin’s vivid imagination, Theo tells him “bedtime stories” about planets that are entirely different in every conceivable way from ours and whose lifeforms, therefore, are also different because they have adapted to these conditions. Moreover, these lifeforms are in hiding from humans because they have witnessed what humans are doing to their own planet and so do not want to be discovered.
One of the questions that underlies the second half of the novel is how the neuro-feedback experiment that Robin undergoes will end. This question heightens the narrative thrust and intensifies the profound impact of the ending – an ending you will not soon forget. Bewilderment is extraordinary in its ability to combine a portrait of that deepest of human bonds, the relationship between parent and child, with the most far-reaching exploration of the possibilities of life beyond our own in the universe. The book will break your heart and blow your mind.
Tanya Long
- Watch your email box for information from the Academy. Look for the sender ouracademy2021@gmail.com
- A password is needed to access the member-only items on our website. Not sure what the password is? Send an email to ouracademy2021@gmail.com.
- Want a pdf version of the AQR? Click on the pdf symbol by the Academy Quarterly Review header.
Write for us
Would you like to write an item for our Academy Quarterly Review?
Do you have an idea for something you’d like to have included?
Or do you have a thought or comment that you would like to share?
If so, just send an email to ouracademy2021@gmail.com. The AQR is the voice of our Academy, and your input is welcomed.
