RECENT NEWS & UPDATES

Cleaning and Clarifying

This year’s project has been an exercise in exploring what “data” means in the humanities and in thinking about research as data collection. It has left me pondering how initial research, data collection, and data cleaning can be viewed, on the one hand, as a single messy procedure or, on the other, as three separate efforts. Clearly, even thinking about data is a messy process, and I’ve learned to find some comfort in that idea rather than impatience or shame. Still, the heart of my project is learning to produce a quality, peer-reviewable, reusable data set, and I’m committed to learning best practices for that task, messy though it may be.

When I wrote my last two posts, I thought I had overcome perfectionist waffling and was just about ready to send my data set to Open Context for review. It was very exciting. However, after some very helpful exchanges with my faculty mentors, I realized there was (quite a bit) more data cleaning I needed to do.

Essentially, I had been conflating categories and columns in the spreadsheet and needed clearer indications of what information I was recording. For instance, instead of a column noting “yes” or “no” for whether a monument is a church, I now use a controlled vocabulary labeling it “sacred” or “secular.” It’s a small change, but one that makes the monument data much clearer in answering the research question of what the monument was used for. I’ve been plugging along in this manner, re-addressing the data in each column and making sure it makes sense and is consistent. (For instance, I had used “null” in some empty cells, “?” in some, and “*” in others for things I either didn’t have information for or wasn’t sure about; these are now being streamlined.)
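
(For anyone who prefers to script this kind of streamlining rather than doing it cell by cell, here is a rough sketch of the idea in Python with pandas. The file and column names are invented for illustration; they are not the ones in my actual spreadsheet.)

```python
import numpy as np
import pandas as pd

# Hypothetical file and column names, for illustration only.
df = pd.read_csv("monuments.csv")

# Collapse the ad hoc missing-value markers into one consistent empty value.
df = df.replace({"null": np.nan, "?": np.nan, "*": np.nan})

# Swap a yes/no "is_church" column for a controlled vocabulary describing use.
vocabulary = {"yes": "sacred", "no": "secular"}
df["use"] = df["is_church"].map(vocabulary)

# Flag rows where the old column held something other than yes/no/empty,
# so they can be checked by hand instead of silently becoming blanks.
suspect = df[df["use"].isna() & df["is_church"].notna()]
print(suspect[["is_church"]])

df = df.drop(columns=["is_church"])
df.to_csv("monuments_clean.csv", index=False)
```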

I spent much of the summer frustrated with how long my work is taking and how easily distracted I am in the wake of a long dissertation-writing process, yet I am incredibly grateful for faculty mentors and others in the Twitterverse who have provided the pep talks, explanations via email and the Commons, and answers to questions that keep me plugging along. A recent article that describes similar experiences is Paige Morgan’s “Not Your DH Teddy-Bear,” on emotional labor in digital humanities—what she defines as “managing people’s emotions so that they can make effective project decisions.”* She discusses it from the perspective of someone doing that labor, the hand-holding and encouraging of researchers, whereas at the moment I’m the lucky recipient. I suspect many of the MSUDAI faculty members will be able to relate to Morgan’s experience of having to provide things that aren’t necessarily digital skills or instructions over the course of this year—advice, encouragement, very detailed emails, emoji tweets. We started a DAC group on digital labor a while back, and this article is an interesting take on that topic.

On that note, I had planned to title this post, “Coulda, Shoulda, Woulda,” and list everything I would do differently next time. But then I spent some time articulating what I actually did, reading my field notebooks, and skimming all the documentation I produced while I worked. The truth is, I think I did the best I could at the time. This is not to say I wouldn’t do anything differently (I’ve made a list!), but it was a moment of clarity to realize that gaining data knowledge is, in fact, a learning process, and I simply must grant to myself the same leeway and patience I extend to my students when they’re learning something new.

My extended data cleaning process means I will need to re-address what I’ll have ready at the institute. I will definitely have data to visualize, but the Open Context peer review for the data set will still take some time, as will working with KORA, so I’ll have to rethink my order of operations. I’m hoping to use some institute time to get advice from Dan, Eric, and Catherine about data cleaning and workflow.

Morgan’s article also conveys patience in the face of mistakes, saying “messiness can be synonymous with complexity and, in that regard, can be generative rather than unproductive, and generative of engagement, rather than just tidying.” So in the spirit of generative and productive chaos, my next post will detail some of the things I know now that will benefit my next data collection project.

Updates:

  • Defended my dissertation. (Hooray!)
  • Crossed the hurdle of being afraid of messy data. (Bring on the chaos).
  • Had some productive emails and Commons conversations with my faculty mentors.
  • Explored other data sets in Open Context for comparison.
  • Got some great instructions for using and contributing to PeriodO for linked open data for time periods from Adam Rabinowitz on the DAC. (Read it here).
  • Applied for an ORCID number. (Required for PeriodO, and useful for all publications).
  • Cleaned the data. (Still cleaning, actually).
  • Started a 10-day mini refresher course on web design (HTML and tech terms) via Skillcrush.
  • Got a Twitter handle and Commons URL for a Cappadocia-related identity that are separate from my sorta-personal current accounts. (More on that at the institute!)

Goals for the Institute:

  • Continue curating/collecting/cleaning the highest-quality data set that I can and make sure it’s worthy of peer review.
  • Start working on ways to visualize that data during the institute.
  • Clarify an online identity to disseminate Cappadocian data (this data set, as well as other images and open access resources) via the DAC and other venues.

 

Reference: Paige Morgan, “Not Your DH Teddy-Bear; or, Emotional Labor Is Not Going Away,” in Digital Humanities in the Library / Of the Library, dh+lib: Where the Digital Humanities and Librarianship Meet, eds. Caitlin Christian-Lamb, Zach Coble, Thomas Padilla, Caro Pinto, Sarah Potvin, John Russell, Roxanne Shirazi, and Patrick Williams (July 29, 2016), http://acrl.ala.org/dh/2016/07/29/not-your-dh-teddy-bear/

A.L. McMichael
@ByzCapp

Wireframes!

With the help of the wonderful folks at MATRIX, I’m still trying my darnedest (and so far not succeeding) to get the KORA WordPress plugin to work, which means I’m still a little stalled. Since I don’t yet know how the plugin behaves and I’m new to significant WordPress fiddling, I don’t want to spend a whole lot of time painstakingly selecting a theme that won’t work. I’m probably doing something stupid in the plugin install process; for now I’m just waiting to figure out what it is.

So here we are, four days until I’m on the ground in East Lansing, and I don’t have a whole lot to show for my project yet. I found myself in a major slump this morning (with a little dose of impostor syndrome thrown in for good measure), but I won’t be defeated. I’m making lists of all of the possibilities and problems I want to explore with Institute mentors and colleagues. I’m prepping files to upload to KORA, and I’m wireframing the major pages of the site to make it easier to select and modify a theme when the time is right.

Kobo Toolbox in the field: limitations and solutions

This is a field report on my efforts to develop a plan for low-cost digital data collection: what I tried, what worked well, what did not, and how those limitations were addressed.

First, a description of the conditions. We live in two locations in Ecuador. The first is the field center established and currently run by Maria Masucci, Drew University. It has many of the conveniences needed for digital data collection, such as reliable electricity, surge protectors, etc., but it has neither internet access nor a strong cellular data signal. We are largely here only on weekends. During the week, we reside in rather cramped conditions in rented space in a much more remote location, where amenities (digital and otherwise) are minimal. There is limited cellular data signal (if you stand on the water tower, which is in the center of town and the highest point even though it is only one story tall, you can get a weak signal; enough for texts and receiving emails, but not enough for internet use or sending emails) and there is no other access to the internet. We also take minimal electronic equipment into the field for the week (e.g., my laptop does not travel), so everything needs to be set up prior to arrival. For this reason I tried to use only one device (while also experimenting with others). My device of choice (or, honestly, by default) is my iPhone 5s.

The central component of this attempt at digital data collection is Kobo Toolbox (see my earlier posts for more details… here, here, here and here), an open-source, browser-based tool for creating, deploying, and collecting forms. Kobo Toolbox’s primary benefit is that, because it is browser-based, it is platform independent: you can use an iPad or an iPhone just as well as an Android device or a Mac or PC. This means that data can be collected on devices that are already owned or that can be bought cheaply (e.g., a lower-end Android device vs. an iPad). Forms are created through Kobo’s online tools and can be fairly elaborate, with skip logic and validation criteria. Once the form is deployed and you have an internet connection, you load the form into a browser on your device and save the link so that it can be used without a data connection. On my iPhone 5s, I simply saved the link to the home screen. A couple of quick caveats are important here. I was able to load the form onto an iPhone 4s (but only using Chrome, not Safari), but was unable to save it, so I lost it once the phone was offline. I was unable to load the form at all on an iPhone 4 (even in Chrome). So although the form should ideally work in any browser, the reality is that it makes use of a number of HTML5 features that are not necessarily present in older browsers. Of course, as time goes on, phones and browsers will incorporate more HTML5 components, and this will be less of an issue.

Once the form is deployed and saved on your device, you can collect data offline. When the device comes back online, it will synchronize the data you have collected with Kobo’s server (note that you can install Kobo Toolbox on a local server, but at your own risk). Then, you can download your data from their easy-to-use website.

For the first week, I set up a basic form that collected largely numerical, text, and locational data. We were performing a basic survey and recording sites. Alongside our normal methods of recording sites and locations, I recorded sites with Kobo Toolbox in order to determine its efficacy under rather difficult “real-world” conditions. I collected data for five days and Kobo Toolbox worked like a dream. It easily stored the data offline and, once I had access to a data signal, all the queued data was quickly uploaded (I had to open the form for this to occur). I was unable to upload with only a weak cellular data signal; the upload completed only once I had access to WiFi (late on Friday night). However, it synchronized nicely, and I was able to then download the data (as a CSV file) and quickly pull it into QGIS.
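
QGIS opens GeoJSON directly, so one low-tech way to go from Kobo’s CSV download to a map layer is a short conversion script. This is only a sketch under assumptions: the column names below (“latitude”, “longitude”, “site_label”) are placeholders, since Kobo names the exported columns after the fields in your own form.

```python
import csv
import json

features = []
with open("kobo_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        try:
            lat = float(row["latitude"])
            lon = float(row["longitude"])
        except (KeyError, ValueError):
            continue  # skip rows with no usable coordinates
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {"site_label": row.get("site_label", "")},
        })

# Write a FeatureCollection that QGIS can load as a point layer.
with open("kobo_sites.geojson", "w", encoding="utf-8") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f)
```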

The single biggest problem I discovered in the field was that I needed to be able to see the locations of the sites recorded with Kobo Toolbox on a dynamic map. Although Kobo Toolbox recorded the coordinates nicely, you cannot see the points on a map in the field; the only way to see the recorded data is by downloading it from Kobo Toolbox, which requires a data connection. You can see and edit the data on the device only if you save it as a draft; once a record is submitted, however, you cannot edit it in the field (this was true of other field collection systems I have used, e.g., FileMaker Go). Yet I still needed a way to visualize site locations (so I could determine distances, relationships to geographic features and other sites, etc., while in the field).

For this purpose I used iGIS, a free iOS app (see below for limitations; a subscription unlocks additional options). Although it is an iOS app with no Android version, there are Android apps that function similarly. With this app, I was able to load my own data as shapefiles (created in QGIS) of topographic lines, previous sites, and other vector data, as well as use a web-based background map (which seemed to work even with a very minimal data connection). Raster data is possible, but it needs to be converted into tiles (the iGIS website suggests MapTiler, but this can also be done in QGIS). Although you can load data via multiple methods (e.g., WiFi using Dropbox), I was able to quickly load the data into the app using iTunes. Once this data is in the app on the phone, an internet connection is no longer needed. As I collected data with Kobo Toolbox, I also collected a point with iGIS (with a label matching the label used in Kobo), so that I could see the relationship between sites and the environment. Importantly, I was also able to record polygons and lines, which you cannot do with Kobo Toolbox. Larger sites are better represented as polygons rather than points (recognizing the c. 5-10 m accuracy of the iPhone GPS). The collection of polygons is a bit trickier, but it works. Polygons and lines can later be exported as shapefiles and loaded into a GIS program. By using equivalent naming protocols between Kobo Toolbox and iGIS, one can ensure that the data from the two sources can be quickly and easily associated (a sketch of such a join follows the list below).

The greatest benefit of iGIS is seeing the location of data points (and lines and polygons) in the field and being able to load custom maps (vector and raster) into the app and view them without a data connection. Although this is possible with paper maps (by printing custom maps, etc.), the ability to zoom in and out greatly increases the value of this app. Getting vector data in and out of iGIS is quite easy and straightforward. iGIS is limited in a couple of ways, nearly all of which are resolved with a subscription, which I avoided. Here’s a brief list of limitations:
– All points (even on different layers) appear exactly the same (same size, shape, and color; fully editable with a subscription). This can make it very difficult to distinguish a town from a site from a geographic location.
– Like points, all lines and polygons appear the same (also remedied with a subscription). It was particularly difficult to tell the difference between the many uploaded topographic lines and the collected polygons.
– Limited editing capabilities (you can edit the location of points, but not the nodes of lines; you can edit selected data).
– Limited entry fields (remedied with a subscription, though perhaps unnecessary if the record can be connected to data collected with Kobo Toolbox).
– Unable to collect “tracks” as with a traditional GPS device. (Edit: OK, so I was wrong about this! You can collect GPS tracks in iGIS, even though it is not as obvious as one might like.)
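
Since the two tools share a naming protocol, associating the Kobo attributes with the iGIS geometries back at the computer can be a one-line join. A sketch, assuming the geopandas library and a shared “site_label” column (the file and column names are placeholders for whatever protocol you actually use):

```python
import geopandas as gpd
import pandas as pd

# Attribute data collected with Kobo Toolbox (the downloaded CSV).
kobo = pd.read_csv("kobo_export.csv")

# Points and polygons exported from iGIS as a shapefile.
sites = gpd.read_file("igis_sites.shp")

# Join on the shared label used in both tools, keeping every geometry
# even if it has no matching Kobo record.
joined = sites.merge(kobo, on="site_label", how="left")
joined.to_file("sites_with_attributes.geojson", driver="GeoJSON")
```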

The final limitation of iGIS (the lack of track collection, or so I thought) concerned a capability I had not originally planned on, but one that became incredibly useful in collecting survey data, especially negative results (positive results were recorded as described above). Our survey employed a “stratified opportunistic” strategy. We largely relied upon local knowledge and previous archaeological identification to locate sites, but also wanted to sample the highest peaks, mid-level areas, and valley bottoms. To do this, we used three different strategies. First, we relied on knowledgeable community members to take us to places they recognized as archaeological sites. Second, we followed selected paths (also chosen by local experts). Third, we chose a few points to visit (especially in the higher peaks, c. 200-300 meters above the valley floor). One of the most important aspects of this type of survey was recording our “tracks” so that we would know where we had traveled. This is commonly done with GPS units, but I was able to collect tracks using MotionX-GPS on the iPhone already in use. The GPS “tracks” (which are really just lines) as well as “waypoints” (i.e., points) were easily exported and loaded into QGIS. This provided easily collected data about where we traveled but did not find archaeological sites. (Edit: Note that you can use iGIS for this function, so MotionX-GPS is not actually needed. It is great for recording mountain biking and hiking, however!)

One final comment will suffice here. I just discovered a new app that may be able to replace iGIS. QField is specifically designed to work with the open-source GIS program QGIS. Although it is still new and definitely still in development, it promises to be an excellent open-source solution for offline digital data collection, though it is limited to Android devices!

Digital Archaeology in Nevada

During July I have concentrated on building the objects for the Cave Rock Website – building the timeline with historic photos and the 3D representation of Cave Rock before 1849.

As our institute project gets closer to launching, I have realized that my exposure to open source programs and public archaeology has expanded throughout my workplace. One of the first things I did when I got back from MSU last August was to present a synopsis of open source software, and the goals of public archaeology to my peers.

A few examples of how we put this to use:

We are using Trello to manage Nevada’s FHWA Bridge Program comment period. In Trello, we placed NDOT’s entire bridge inventory out for review and comment on exempting certain post-1945 bridges from further Section 106 review. We were able to include the inventory, pictures of bridge types, and a pictorial glossary of bridge elements. The comment period has now closed, and we are compiling the comments for ACHP review. Trello made it much easier to share information and supporting media, and the effort is expected to result in exempting over 1,600 mass-produced bridges statewide.

FHWA Bridge Program Comment on Trello

Using SketchFab, we shared 3D objects of late Pleistocene grasshoppers from an early food cache with entomologists nationwide for study. The grasshoppers represent a poorly understood subspecies in the Great Basin. The archaeologist working on the analysis, Evan Pellegrini, has made many new research contacts and now has more collaborative opportunities for his project.

Several years ago, one of our custom-built programs became unsustainable and was abandoned. Using OpenRefine, we are reformatting the data stored in that program to relate it back to spatial data in GIS. By recovering this legacy data, we now have robust access to thirty years of cultural field data reports throughout Nevada.

The big takeaways have been expanded research, a wider audience for the data, and the recovery of important legacy data. Thanks Digital Archaeology!

Reducing visual load and visualizing geo-information

Since the last post, I’ve been working with Mapbox and Leaflet. I decided to branch my repo to work on a non-WebGL-based webmap. I am focusing on existing map interaction tools, such as marker clustering, that reduce the ‘visual load’ in MINA. As I discussed in my previous post, there were several issues with the webmap. In addition to a noticeable lag when interacting with the map, the density of markers made it difficult to make visual sense of the data (i.e., to actually see spatial patterns). I want to use webmaps to gain a better understanding of archaeological phenomena, and clustering techniques seem a good next step to facilitate deeper insights into archaeological data.

Here is the first look using markerClustering (left). A lot easier on the eyes than the early iteration (right).

Using markerClustering (left) and individual markers (right)

I’ve reverted to Mapbox Classic + Leaflet since this combination is well developed. There is still more to do, like enabling popups on click and other data navigation tools like filtering, as I had originally proposed. For example, wouldn’t it be great to see where in India particular archaeologists were most active compared with others, and to examine how reliable the geocoding was?

While working with the WebGL map, I noticed there were two markers located in Europe. This is clearly problematic since all the investigations are supposed to be in India! So I returned to the csv and re-examined the geocoding. Sure enough, two records (geocoded using APIs that I have previously written about here and here) did indeed have geographic coordinates outside India. In fact, these records (and eight others) had incorrect administrative units, which might be why they were incorrectly geocoded. I missed them when I did the initial cleaning in OpenRefine. Since I am knee-deep in correcting these, I thought I would revisit the ‘precision’ column in the csv.

Most records (2134 of 2273) in my csv (on github here) have a ‘3’, which means the coordinates point to a “region”. Records with a ‘5’ (109 of 2273) point to a “city or municipality”, and those with a ‘9’ point to a “rooftop”. A few records (8) have no precision value. This is somewhat useful in assessing how much I trust the geocoding. To clarify, here is a breakdown based on the control set I created:

Spreadsheet shows comparison between coordinate values manually collected (Nlat, Nlong) and those from batch geocoding

Differences between the coordinates I manually assigned (using Google, etc.) and those the batch geocoder returned range from 0.01 to 6 degrees. That is quite a range. For reference, one degree of latitude is ~110-112 km (a degree of longitude is somewhat less at India’s latitudes). If I arbitrarily take two degrees as my cutoff, it would mean that differences over roughly 220-224 km are pretty ‘out there’.
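
Applying that cutoff across the whole control set is easy to script. A sketch, assuming the spreadsheet above is saved as a CSV with the manually collected columns Nlat/Nlong and (placeholder names) lat/long for the batch-geocoded values:

```python
import pandas as pd

df = pd.read_csv("control_set.csv")

# Absolute difference, in degrees, between manual and batch-geocoded coordinates.
df["lat_diff"] = (df["Nlat"] - df["lat"]).abs()
df["long_diff"] = (df["Nlong"] - df["long"]).abs()

# Flag anything beyond the arbitrary two-degree (~220 km) cutoff for rechecking.
CUTOFF = 2.0
suspect = df[(df["lat_diff"] > CUTOFF) | (df["long_diff"] > CUTOFF)]
print(suspect[["Nlat", "Nlong", "lat", "long", "lat_diff", "long_diff"]])
```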

Most of the above records fall well within one degree, which is good. But where the values differ, they can vary by several hundred kilometers. The geocoder can be highly precise, but it can also be inaccurate. I have an example of this here:

Image shows two maps. Google map (left) shows location of “Hasanpura”, north of the Narmada River. Inset (right) shows where the archaeologist marked the same named place, south of the Narmada.

Which is correct? How much does a precision of 3 mean versus a 5 or a 9 in terms of reliability? In short, I have a lot more work to do before I can offer MINA as anything other than a ‘proof of concept’. But things are moving along on the different tools that enable data navigation. I hope to have another update before the Institute in mid-August.

Artifacts Selected for Virtual Museum

This month I’ve made the final selections for the seventeen artifacts that will be featured in the virtual museum when it launches. The artifacts include four socketed bone points, two elements of composite fish hooks, three fish gorges, a leister prong, a sawfish tooth hafted point, a drilled and hafted shark tooth, two bone beamers, a drilled turtle shell, a possible wooden pole, and a wooden shaft from inside a socketed bone point. There will be photographs and information about the other artifacts recovered at the site, but the seventeen listed above will have individual text content and multimedia. I’ve drafted the text to accompany all of these artifacts, as well as the contextual information about the site, the archaeological collection, and the methods used to study the site and artifacts. I’ve spent a lot of time trying to tell a story with the objects, rather than providing dry facts about the tools. It’s been a fun challenge to break the information up into small segments while remaining informative and cohesive.

I spoke to several members of the NPS web team last week and my proposal is moving forward to be presented to the web editorial board.  The NPS web team has been very helpful and supportive.  I’ve received some good feedback from them and from some co-workers.  I’ll brush up on some training while I’m waiting to hear about my proposal (which should be discussed at the meeting later this week).  I’m really looking forward to getting to work on the actual website!  I know it’s going to be a major push to have it ready for the Institute.  I’m hoping that all the work I’ve done on the front end for the content and organization will pay off when I’m building the site.

Copyright, fair use, and the digital repository (part 1)

While I wait for the KORA WordPress plugin’s completion to dig into front end development (still reserving the PHP option mentally since the clock is ticking), I’ve been deep into research on copyright and intellectual property implications of this kind of project. I’ve been thinking about this all for a while. As I’ve begun ingesting digital objects, I’ve been confronted with that pesky “rights” field. After my dead-end research in February, I just included some default text that essentially says “contact Archivist for details,” knowing full well that it was a total cop-out.

So this week I renewed my quest. And I also kept bringing myself back to the fact that the product of my Institute project isn’t only the website that comes out at the end; it’s all of this documentation. Lack of clarity on rights issues for this type of data seems not to be uncommon. So hopefully other similar institutions will find these links helpful. Since copyright laws for state records vary among states, much of this will be Virginia-centric, but links to specific information for other states are easy to find.

A refresher on my scenarios

In this repository, I’ll be including several types of media (or objects): PDFs of gray literature, digital versions of photographs, datasets, and scanned maps and drawings. All of this is held by our institution. But does that mean we have the right to make digital versions available for download? If we wanted to use Creative Commons licensing on this material, do we have that authority? My hunch (which is beginning to bear out) is that, well, it depends. Here are the four categories of reports, as I see them, for illustrative purposes:

  1. Reports written by my agency
    This one is easy. Copyright is held by the Commonwealth of Virginia for material created by an agency or its employees.
  2. Reports written *for* my agency (usually by an outside consultant) with agency funds
    At first I figured that if the client is the agency, then it’s the agency who would hold the copyright. Right? Not so fast. It doesn’t appear that this kind of technical report falls under the Work Made for Hire provision of the 1976 Copyright Act (for a great explanation, see this post). The next hook for rights considerations will be to figure out whether these are considered government documents.
  3. Reports written by others for compliance with environmental review regulations (federal and/or state) and submitted to my agency
    The original author is usually a consultant. The “client” is either another private entity or a government agency. So, we’ll assume that the copyright is held by the consultant unless the report says otherwise, but these are definitely going to be considered government documents.
  4. “Courtesy” reports
    Since these were (and continue to be) donated to us outside of any specific requirement and we don’t track the donor, I’ll assume that the copyright remains with the author and that they aren’t “government documents,” although there’s a chance that just being a part of our Archives makes something a government document. Oh, brother.

Things I’ve learned from this research
The Commonwealth of Virginia doesn’t even have a formal Copyright and Intellectual Property policy. A 2009 statute essentially mandated that one be created, but it doesn’t appear to exist.

Our Archives should implement more controls on verifying copyright holder and usage rights for each document, explicitly allowing our agency to create and publish digital versions.

 

In the end, I hope to be able to classify all or most of this material according to RightsStatements.org and place standardized labels on each digital object with links to the URI for each. Ideally, we can work this into the beginning of the archives accession workflow in the future instead of trying to verify the information at the end of the line.
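
As a rough illustration of what those standardized labels might look like in practice, here is a sketch mapping the four report categories above to RightsStatements.org URIs. The assignments are placeholders only, not legal conclusions; the real labels will depend on how the rights questions in this post shake out.

```python
# Placeholder mapping from report categories to RightsStatements.org URIs.
# These assignments are illustrative, not determinations of copyright status.
RIGHTS_URIS = {
    "agency_report": "http://rightsstatements.org/vocab/InC/1.0/",      # category 1
    "contracted_report": "http://rightsstatements.org/vocab/UND/1.0/",  # category 2, undetermined for now
    "compliance_report": "http://rightsstatements.org/vocab/UND/1.0/",  # category 3, undetermined for now
    "courtesy_report": "http://rightsstatements.org/vocab/CNE/1.0/",    # category 4, not yet evaluated
}

def rights_uri(category: str) -> str:
    """Return a standardized rights URI, defaulting to 'Copyright Not Evaluated'."""
    return RIGHTS_URIS.get(category, "http://rightsstatements.org/vocab/CNE/1.0/")

print(rights_uri("courtesy_report"))
```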

Several people have mentioned that there may simply not be clear legal precedent for some of this information. That’s cold comfort (and being part of a legal precedent isn’t particularly high on my bucket list), but I can see it as a possibility. I’m much less terrified of the scary legal implications of all this than I was back in February, and hopefully my bosses will feel the same way.

At this point, I’ve made some progress on the copyright front. Next up is fair use. I’ll follow up with another post as things coalesce.

 

Helpful links from my quest

University of Texas Arlington: Copyright and Fair Use
A great overview of copyright with lots of content.

Harvard State Copyright Resource Center: Virginia
The mother lode of helpful state-specific information.

Code of Virginia § 2.2-2822. Ownership and use of patents and copyrights developed by certain public employees; Creative Commons copyrights.
tl;dr: “We need a policy on this.”

U.S. Copyright Office Fair Use Index
I plan to spend some quality time with this over the next few days. Stay tuned for Part 2.

I’m also looking at a variety of digital repositories’ FAQs for rights clues, like this one: http://libguides.asu.edu/digitalrepository/rights/.

 

**If you’re reading this and you see any errors or incorrect assumptions on my part, please please let me know in the comments. I also welcome links!**

Acknowledgments:
J. Albert Bowden II
Kathy Jordan at the Library of Virginia

Promoting Taraco Landscapes

Greetings from Bolivia! This post will be short and not too technological, as I am in the midst of my field season and have very limited access to the Internet. I’ve spent most of this month out at the site of Chiripa on the northern shores of Lake Titicaca, mapping the landscape around the famous Formative period (1500-300 BC) mound. The view is spectacular, and I have many pictures similar to this one.

Taraco Work

Local Chiripa workers holding stadia rods for Total Station mapping of Lake Titicaca plain north of the site of Chiripa, Bolivia. Snow-capped Andean Eastern Cordillera in the background.

In addition to some great progress on our research about the landscape, we’ve had many, many conversations with people about tourism in the area and how to get more visitors out to the fascinating and beautiful Taraco Peninsula. Most tourists (national and foreign) only get as far as the most famous Bolivian site of Tiwanaku, but the Taraco sites are only about an hour away. In addition to the sites themselves, there are three small museums awaiting visitors. The current Mayor of the municipality is very interested in archaeology and promoting tourism in the area, and he enthusiastically supported our project this year. One of the primary problems (aside from the need for more infrastructure to support tourism, such as improving the road) is that people do not know about the area or the sites. There are plenty of tour companies in La Paz that could bring people out to the peninsula, but they simply don’t know what is on offer. Given these many conversations, and thinking about my DAI project, I plan to shift it to be less an “academic” exploration of Taraco landscapes and more an informational resource that could be useful to the Mayor’s office and the communities in promoting awareness of the area. The information that I plan to add to my developing sites, once I’m back in the States with regular Internet access and during the week back at MSU, will focus on the primary archaeological sites and the museums. I’m also going to work on making it bilingual! This could be a better starting point for developing the site with additional information about landscapes and people, as I had originally envisioned.

Making Drafts Public

This month, after one month off the MSUDAI blog radar, I am making my draft public. So, you’ll hear from me twice this month. For this post I will briefly reflect on making drafts public, while making my project website public.

Today I turned Maintenance Mode off on http://pocumtuckheritage.org, which is terrifying and exciting. As I write this, the site looks decent, but I am on the fourth iteration of deciding where to put the historical excerpts that my product’s structure depends upon, so they are not even visible right now. Furthermore, I am in the middle of building a WordPress theme, which I will soon upload to the server; then the website is gonna look really messy for a long time, in public.

Clearly I thought through my project, tried various things, and put a lot of hours into getting to this point, so what am I so afraid of? An imperfect, unfinished product can reveal things about your work style, about what comes easily and what does not, or poorly worded versions of your argument. And what if people know I have not finished this project yet, 11 months in? Or what if they find out I am an impostor and don’t really know that much after all?

But everyone who has written or built anything knows what a work-in-progress represents, and we can often learn from one another’s process. So having some people working in public can be really helpful to the community. Furthermore, no one really cares that you are not perfect, even your employer – they care whether you do good (excellent?) work. I know some people have been able to work more or less in public much earlier, but for me this is a big emotional leap. Everyone will have their turn. I am excited to see the other projects rolled out to the public and for us to be able to help each other realize what we envisioned!

Cave Rock Story….

It’s summer and things are pulling together very slowly. While the photos have provided a lot of historic detail, the photo drape would not have had sufficient detail to convey the story as hoped. On to Plan B! The 3D object of Cave Rock will be a recreation, with the photos supplementing the object as a timeline “essay.” I am finding that the historic photos have captured public interest all on their own, while the recreation without the photo drape is just as interesting to the public. I have been very fortunate to be able to talk with the Tribal Historic Preservation Officer for the Washoe Tribe of California and Nevada as the project developed.

The glass negatives that were scanned for the project have all now been returned to the Nevada State Library and Archives, which has been their home for many years. At the library’s request, all the scans were forwarded as JPEGs along with a spreadsheet of information suitable for creating a more permanent archive. High-quality TIFFs will remain housed within the Nevada Department of Transportation’s Photogrammetry archive and will be available to the public for the first time. Although the basic work is done, identifying the subjects and dates of the negatives will require more time and will be a separate project.