I prepared a dataset from the Twitter user with user_id_str 25073877, containing a total of 123 public Tweets and corresponding metadata published between 15 February 2017 06:40:32 and 15 March 2017 08:14:20 Eastern Time.
I have archived 3,603 public Tweets with from_user_id_str 25073877. I looked at the sources of those Tweets and at the top 50 most frequent terms per main source (iPhone and Android).
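A per-source term count like this can be sketched in a few lines. This is a minimal illustration, not the original workflow: it assumes a CSV export with hypothetical `source` and `text` columns, and naive whitespace tokenisation with no stop-word cleaning.

```python
import csv
from collections import Counter

def top_terms_per_source(path, n=50):
    """Count the most frequent terms in Tweet text, grouped by source client.

    Assumes a CSV with 'source' (e.g. "Twitter for iPhone") and 'text'
    columns; these column names are illustrative, not the archive's schema.
    """
    counters = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Lower-case and split on whitespace; a real cleanup pass would
            # also strip punctuation, URLs and stop words.
            terms = row["text"].lower().split()
            counters.setdefault(row["source"], Counter()).update(terms)
    return {src: c.most_common(n) for src, c in counters.items()}
```

In practice the interesting step is the cleaning before counting; `Counter.most_common` then gives the top-n list per source directly.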
I looked at a dataset of all the Tweets from DJT's account timestamped between 04/11/2016 14:56 and 13/02/2017 22:30 (Washington, DC time) in order to get an idea of the change in follower numbers (user_followers_count) over that period.
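Because each archived Tweet carries the account's user_followers_count at collection time, the net change over a period is just the difference between the earliest and latest observations. A minimal sketch, assuming the data has been reduced to (timestamp, follower count) pairs:

```python
from datetime import datetime

def follower_change(rows):
    """Net change in follower count between the earliest and latest Tweet.

    `rows` is an iterable of (datetime, user_followers_count) pairs; this
    pair shape is an assumption for illustration, not the archive's format.
    """
    ordered = sorted(rows, key=lambda r: r[0])
    return ordered[-1][1] - ordered[0][1]
```

Sorting first matters: archives are often ordered newest-first, so taking first-minus-last without sorting would flip the sign.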
A quick word count of DJT's Tweets from inauguration day until 06/02/2017 07:07:55 AM Eastern Time.
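A word count over a date window is the same filter-then-count pattern. This sketch assumes the archive has been loaded as (datetime, text) pairs, which is a hypothetical shape, not the original pipeline:

```python
from collections import Counter
from datetime import datetime

def word_count_since(tweets, cutoff):
    """Count word frequencies in Tweets timestamped on or after `cutoff`.

    `tweets` is an iterable of (datetime, text) pairs (an assumed format).
    """
    counts = Counter()
    for ts, text in tweets:
        if ts >= cutoff:
            counts.update(text.lower().split())
    return counts
```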
I made a collection of Tweets tagged with #TheDataDebates. What's the use?
I made a collection of Tweets tagged with #dhcshef published publicly between Monday September 05 2016 at 17:54:58 +0000 and Saturday September 10 2016 at 23:37:06 +0000, and wrote a little bit about it.
Here's an edited list of the top 50 most frequent terms extracted from a cleaned dataset comprising 10,721 #WLIC2016 Tweets published between Monday 15/08/2016 10:11:08 EDT and Wednesday 17/08/2016 07:16:35 EDT.
I have looked at the text of 4,945 Tweets published with #WLIC2016 between 14/08/2016 and 15/08/2016 11:16:06 EDT (Columbus, Ohio time).
"The BBC's Great Debate" was broadcast live in the UK between 20:00 and 22:00 BST. I collected some of the Tweets tagged with #BBCDebate using a Google Spreadsheet. I have shared a dataset and offer some insights from the data here.
I have now shared a spreadsheet containing an archive of 1,005 @StrongerIn Tweets publicly published by the queried account between 12/06/2016 13:34:35 and 21/06/2016 13:11:34 BST.
As the date to vote in person approaches, I collected and shared a dataset of Tweets published by the official Leave campaign Twitter account, @vote_leave, between 12/06/2016 09:06:22 and 21/06/2016 09:29:29 BST. The dataset contains 1,100 Tweets.
[Revised]. If there is content overload, whose responsibility is it to filter, and is filtering, as we have traditionally defined it, still really possible under the current infrastructures?