Meta Patent: Tactile messages in an extended reality environment

Patent: Tactile messages in an extended reality environment

Publication Number: 20230393659

Publication Date: 2023-12-07

Assignee: Meta Platforms Technologies

Abstract

Techniques for sending and receiving tactile messages (e.g., haptic emojis) in an extended reality environment to facilitate touch communication between users. In one particular aspect, an extended reality system is provided having a head-mounted device with a display to display content to a first user, sensors to capture input data, processors, and memories accessible to the processors, the memories storing instructions executable by the processors to perform processing including: capturing, using the one or more sensors, the input data from the first user, extracting features from the input data that correspond to an electronic communication, identifying an emoji from a lexicon of emojis based on the extracted features, obtaining digital assets for the emoji, where the digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output, and transmitting the digital assets to a device of a second user.

Claims

What is claimed is:

1. An extended reality system comprising: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user; one or more processors; and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user; extracting features from the input data that correspond to an electronic communication; identifying an emoji from a lexicon of emojis based on the extracted features; obtaining digital assets for the emoji, wherein the digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output; and transmitting the digital assets to a device of a second user.

2. The extended reality system of claim 1, wherein the extracting the features comprises: determining characteristics of the input data, and identifying patterns within the input data that correspond to a key or attributes of electronic communication based on the characteristics, the key or attributes being the extracted features; and wherein the identifying the emoji comprises: constructing a query using the extracted features as parameters of the query, and executing the query on the lexicon of emojis.

3. The extended reality system of claim 1, wherein the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output.

4. The extended reality system of claim 1, wherein the digital assets further comprise an image or video asset, an audio asset, or both.

5. The extended reality system of claim 1, wherein the processing further comprises obtaining additional information based on the emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and transmitting the additional information to the device of the second user.

6. An extended reality system comprising: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user; one or more processors; and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user; predicting a haptic emoji or a haptic signal based on the input data and model parameters learned from historical input data and context data; and transmitting the haptic signal or digital assets for the haptic emoji to a device of a second user.

7. The extended reality system of claim 6, wherein the processing further comprises obtaining the digital assets for the haptic emoji, and the digital assets comprise the haptic signal configured with parameter information to generate patterns for haptic output.

8. The extended reality system of claim 6, wherein the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output.

9. The extended reality system of claim 6, wherein the digital assets further comprise an image or video asset, an audio asset, or both.

10. The extended reality system of claim 7, wherein the processing further comprises obtaining additional information based on the emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and transmitting the additional information to the device of the second user.

12. The extended reality system of claim 11, wherein the parameters of the one or more actuator signals include information on pressure, temperature, texture, shear stress, time, space, or a combination thereof.

13. The extended reality system of claim 11, wherein the processing further comprises, prior to generating the one or more actuator signals, adjusting the parameter information on the interval, pitch, amplitude, or a combination thereof for the haptic signal in accordance with preferences of the first user.

14. The extended reality system of claim 12, wherein the processing further comprises, prior to generating the one or more actuator signals, adjusting the parameter information on the pressure, temperature, texture, shear stress, time, space, or a combination thereof for the one or more actuator signals in accordance with preferences of the first user.

15. The extended reality system of claim 11, wherein the processing further comprises obtaining additional information based on an emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and the haptic output is generated with virtual content, which is generated and rendered by the head-mounted device in an extended reality environment displayed to the first user based on the additional information.

16. The extended reality system of claim 11, wherein the haptic signal is predicted based on input data and model parameters learned from historical input data and context data, and the input data is captured from a head-mounted device of the second user.

17. The extended reality system of claim 11, wherein the haptic signal is part of digital assets obtained for an emoji identified from a lexicon of emojis.

18. The extended reality system of claim 17, wherein the emoji is identified from a lexicon of emojis based on extracted features from input data that correspond to an electronic communication, and the input data is captured from a head-mounted device of a second user.

19. The extended reality system of claim 17, wherein the digital assets further comprise an image or video asset, an audio asset, or both.

20. The extended reality system of claim 17, wherein the haptic signal for the emoji is transmitted to the head-mounted device of the first user.

Description

CROSS-REFERENCE TO RELATED APPLICATION

The present application is a non-provisional application of and claims the benefit and priority under 35 U.S.C. 119(e) of U.S. Provisional Application No. 63/365,689, filed Jun. 1, 2022, the entire contents of which is incorporated herein by reference for all purposes.

FIELD

The present disclosure relates generally to haptic communication in an extended reality environment, and more particularly, to techniques for sending and receiving tactile messages (e.g., haptic emojis) in an extended reality environment to facilitate touch communication between users.

BACKGROUND

BRIEF SUMMARY

Techniques disclosed herein relate generally to haptic communication in an extended reality environment. More specifically and without limitation, techniques disclosed herein relate to sending and receiving tactile messages (e.g., haptic emojis) in an extended reality environment to facilitate touch communication between users. Haptic emojis or reactions are tactile messages that can be sent and received throughout the day with a wearable device (e.g., haptic glove or wristband). Each haptic emoji or reaction may be accompanied by audio and/or visual components to help train a user on the haptic signals. The tactile messages can be sent through traditional user interfaces, haptic-first interfaces, or more expressive gestures such as a hand wave, where in this example the recipient may feel a haptic pattern to mimic a wave motion.
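For concreteness only, the following is a minimal sketch of how such a tactile message might be represented in transit. All names and fields (TactileMessage, HapticPattern, the asset paths) are hypothetical illustrations, not the data format disclosed in this patent.

```python
# Hypothetical sketch of a tactile-message payload; names and fields are
# illustrative assumptions, not the data format disclosed in the patent.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class HapticPattern:
    intervals_ms: List[int]     # gaps between successive pulses
    pitches_hz: List[float]     # vibration frequency of each pulse
    amplitudes: List[float]     # normalized strength (0.0-1.0) of each pulse

@dataclass
class TactileMessage:
    sender_id: str
    recipient_id: str
    emoji_id: str                       # e.g., "wave"
    haptic: HapticPattern
    audio_asset: Optional[str] = None   # e.g., a .wav file to play alongside
    visual_asset: Optional[str] = None  # e.g., a .gif file to render alongside

# Example: a "wave" reaction whose pulses sweep across the wrist.
wave = TactileMessage(
    sender_id="user_a", recipient_id="user_b", emoji_id="wave",
    haptic=HapticPattern(intervals_ms=[0, 80, 80, 80],
                         pitches_hz=[170.0, 200.0, 230.0, 200.0],
                         amplitudes=[0.6, 0.8, 1.0, 0.8]),
    audio_asset="wave_hello.wav", visual_asset="wave.gif",
)
```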

In various embodiments, an extended reality system is provided that includes: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user; one or more processors, and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user, extracting features from the input data that correspond to an electronic communication, identifying an emoji from a lexicon of emojis based on the extracted features, obtaining digital assets for the emoji, wherein the digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output, and transmitting the digital assets to a device of a second user.

In some embodiments, the extracting the features comprises: determining characteristics of the input data, and identifying patterns within the input data that correspond to a key or attributes of electronic communication based on the characteristics, the key or attributes being the extracted features, and the identifying the emoji comprises: constructing a query using the extracted features as parameters of the query, and executing the query on the lexicon of emojis.

In some embodiments, the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output.

In some embodiments, the digital assets further comprise an image or video asset, an audio asset, or both.

In some embodiments, the processing further comprises obtaining additional information based on the emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and transmitting the additional information to the device of the second user.

In various embodiments, an extended reality system is provided that includes: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user, one or more processors, and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user, predicting a haptic emoji or a haptic signal based on the input data and model parameters learned from historical input data and context data, and transmitting the haptic signal or digital assets for the haptic emoji to a device of a second user.

In some embodiments, the haptic emoji is predicted and the processing further comprises obtaining the digital assets for the haptic emoji, and the digital assets comprise the haptic signal configured with parameter information to generate patterns for haptic output.

In some embodiments, the parameters of the one or more actuator signals include information on pressure, temperature, texture, shear stress, time, space, or a combination thereof.

In some embodiments, the processing further comprises, prior to generating the one or more actuator signals, adjusting the parameter information on the interval, pitch, amplitude, or a combination thereof for the haptic signal in accordance with preferences of the first user.

In some embodiments, the processing further comprises, prior to generating the one or more actuator signals, adjusting the parameter information on the pressure, temperature, texture, shear stress, time, space, or a combination thereof for the one or more actuator signals in accordance with preferences of the first user.

In some embodiments, the processing further comprises obtaining additional information based on an emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and the haptic output is generated with virtual content, which is generated and rendered by the head-mounted device in an extended reality environment displayed to the first user based on the additional information.

Some embodiments of the present disclosure include a computer-implemented method comprising part or all of one or more methods and/or part or all of one or more processes disclosed herein.

Some embodiments of the present disclosure include a system including one or more data processors. In some embodiments, the system includes a non-transitory computer readable storage medium containing instructions which, when executed on the one or more data processors, cause the one or more data processors to perform part or all of one or more methods and/or part or all of one or more processes disclosed herein.

Some embodiments of the present disclosure include a computer-program product tangibly embodied in a non-transitory machine-readable storage medium, including instructions configured to cause one or more data processors to perform part or all of one or more methods and/or part or all of one or more processes disclosed herein.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a simplified block diagram of a network environment in accordance with various embodiments.

FIG. 2A is an illustration depicting an example extended reality system that presents and controls user interface elements within an extended reality environment in accordance with various embodiments.

FIG. 2B is an illustration depicting user interface elements in accordance with various embodiments.

FIG. 3A is an illustration of an augmented reality system in accordance with various embodiments.

FIG. 3B is an illustration of a virtual reality system in accordance with various embodiments.

FIG. 4A is an illustration of haptic devices in accordance with various embodiments.

FIG. 4B is an illustration of an exemplary virtual reality environment in accordance with various embodiments.

FIG. 4C is an illustration of an exemplary augmented reality environment in accordance with various embodiments.

FIG. 5 is a simplified block diagram of a social communication platform in accordance with various embodiments.

FIG. 6A is a simplified block diagram illustrating a social communication system for converting input data to haptic output using a lexicon of emojis in accordance with various embodiments.

FIG. 6B is an illustration of digital assets for a lexicon of emojis in accordance with various embodiments.

FIG. 6C is an illustration of digital assets for a lexicon of emojis in accordance with various embodiments.

FIG. 7 is a flowchart illustrating a process for converting input data to haptic output using a lexicon of emojis in accordance with various embodiments.

FIG. 8 is a simplified block diagram illustrating a machine-learning prediction system in accordance with various embodiments.

FIG. 9 is a flowchart illustrating a process to predict haptic emojis for conveying a touch message in accordance with various embodiments.

FIG. 10 is a simplified block diagram illustrating a social communication system for supplementing a haptic signal with additional information to facilitate a user learning a haptic output in accordance with various embodiments.

FIG. 11 is a flowchart illustrating a process for supplementing a haptic signal with additional information to facilitate a user learning a haptic output in accordance with various embodiments.

FIG. 12 is a simplified block diagram illustrating a signal generator for operating cutaneous actuators to deliver haptic output (tactile feedback) to a user in accordance with various embodiments.

FIG. 13 is a flowchart illustrating a process for generating a haptic output in accordance with various embodiments.

DETAILED DESCRIPTION

In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of certain embodiments. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs.

INTRODUCTION

In another exemplary embodiment, an extended reality system is provided comprising: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user; one or more processors; and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user; predicting a haptic emoji or a haptic signal based on the input data and model parameters learned from historical input data and context data; and transmitting the haptic signal or digital assets for the haptic emoji to a device of a second user.

Advantageously, the tactile messages are more expressive than visual- or audio-based messages, and are particularly useful when a user cannot view or listen to visual- or audio-based messages.

Extended Reality System Overview

This disclosure contemplates any suitable network 120. As an example and not by way of limitation, one or more portions of a network 120 may include an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, or a combination of two or more of these. A network 120 may include one or more networks 120.

Links 125 may connect a client system 105, a virtual assistant engine 110, and a remote system 115 to a communication network 120 or to each other. This disclosure contemplates any suitable links 125. In particular embodiments, one or more links 125 include one or more wireline (such as for example Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS)), wireless (such as for example Wi-Fi or Worldwide Interoperability for Microwave Access (WiMAX)), or optical (such as for example Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH)) links. In particular embodiments, one or more links 125 each include an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, a portion of the Internet, a portion of the PSTN, a cellular technology-based network, a satellite communications technology-based network, another link 125, or a combination of two or more such links 125. Links 125 need not necessarily be the same throughout a network environment 100. One or more first links 125 may differ in one or more respects from one or more second links 125.

In various embodiments, a client system 105 is an electronic device including hardware, software, or embedded logic components or a combination of two or more such components and capable of carrying out the appropriate extended reality functionalities in accordance with techniques of the disclosure. As an example, and not by way of limitation, a client system 105 may include a desktop computer, notebook or laptop computer, netbook, a tablet computer, e-book reader, GPS device, camera, personal digital assistant (PDA), handheld electronic device, cellular telephone, smartphone, a VR, MR, or AR headset such as an AR/VR HMD, other suitable electronic device capable of displaying extended reality content, or any suitable combination thereof. In particular embodiments, the client system 105 is an AR/VR HMD as described in detail with respect to FIG. 2. This disclosure contemplates any suitable client system 105 configured to generate and output extended reality content to the user. The client system 105 may enable its user to communicate with other users at other client systems 105.

A user at the client system 105 may use the virtual assistant application 130 to interact with the virtual assistant engine 110. In some instances, the virtual assistant application 130 is a stand-alone application or integrated into another application such as a social-networking application or another suitable application (e.g., an artificial simulation application). In some instances, the virtual assistant application 130 is integrated into the client system 105 (e.g., part of the operating system of the client system 105), an assistant hardware device, or any other suitable hardware devices. In some instances, the virtual assistant application 130 may be accessed via a web browser 135. In some instances, the virtual assistant application 130 passively listens to and watches interactions of the user in the real-world, and processes what it hears and sees (e.g., explicit input such as audio commands or interface commands, contextual awareness derived from audio or physical actions of the user, objects in the real-world, environmental triggers such as weather or time, and the like) in order to interact with the user in an intuitive manner.

In various embodiments, a remote system 115 may include one or more types of servers, one or more data stores, one or more interfaces, including but not limited to APIs, one or more web services, one or more content sources, one or more networks, or any other suitable components, e.g., that servers may communicate with. A remote system 115 may be operated by a same entity or a different entity from an entity operating the virtual assistant engine 110. In particular embodiments, however, the virtual assistant engine 110 and third-party systems 115 may operate in conjunction with each other to provide virtual content to users of the client system 105. For example, a social-networking system 145 may provide a platform, or backbone, which other systems, such as third-party systems, may use to provide social-networking services and functionality to users across the Internet, and the virtual assistant engine 110 may access these systems to provide virtual content on the client system 105.

The remote system 115 may include a content object provider 150. A content object provider 150 includes one or more sources of virtual content objects, which may be communicated to the client system 105. As an example, and not by way of limitation, virtual content objects may include information regarding things or activities of interest to the user, such as, for example, movie showtimes, movie reviews, restaurant reviews, restaurant menus, product information and reviews, instructions on how to perform various tasks, exercise regimens, cooking recipes, or other suitable information. As another example and not by way of limitation, content objects may include incentive content objects, such as coupons, discount tickets, gift certificates, or other suitable incentive objects. As another example and not by way of limitation, content objects may include virtual objects such as virtual interfaces, 2D or 3D graphics, media content, or other suitable virtual objects.

In the example shown in FIG. 2A, virtual information or objects 240, 245 are mapped at a position relative to a physical object 235. As should be understood, the virtual imagery (e.g., virtual content such as information or objects 240, 245 and virtual user interface 250) does not exist in the real-world, physical environment. Virtual user interface 250 may be fixed, as relative to the user 220, the user's hand 230, physical objects 235, or other virtual content such as virtual information or objects 240, 245, for instance. As a result, client system 200 renders, at a user interface position that is locked relative to a position of the user 220, the user's hand 230, physical objects 235, or other virtual content in the extended reality environment, virtual user interface 250 for display at extended reality system 205 as part of extended reality content 225. As used herein, a virtual element ‘locked’ to a position of virtual content or physical object is rendered at a position relative to the position of the virtual content or physical object so as to appear to be part of or otherwise tied in the extended reality environment to the virtual content or physical object.
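As a rough illustration of the ‘locked’ rendering just described, the sketch below recomputes a panel position each frame from a tracked object position plus a fixed offset. The translation-only math and all names are assumptions; a real renderer would compose full pose (rotation and translation) transforms.

```python
# Translation-only sketch of a position-locked virtual UI panel: each frame
# the panel position is recomputed from the tracked object position plus a
# fixed offset, so the panel appears tied to the object.
import numpy as np

def locked_ui_position(object_position: np.ndarray,
                       offset: np.ndarray) -> np.ndarray:
    """Panel position that follows the tracked object."""
    return object_position + offset

hand_position = np.array([0.2, 1.1, -0.4])  # tracked pose, updated each frame
panel_offset = np.array([0.0, 0.15, 0.0])   # hover 15 cm above the hand
print(locked_ui_position(hand_position, panel_offset))
```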

Client system 200 may trigger generation and rendering of virtual content based on a current field of view of user 220, as may be determined by real-time gaze 255 tracking of the user, or other conditions. More specifically, image capture devices of the sensors 215 capture image data representative of objects in the real world, physical environment that are within a field of view of image capture devices. During operation, the client system 200 performs object recognition within image data captured by the image capture devices of extended reality system 205 to identify objects in the physical environment such as the user 220, the user's hand 230, and/or physical objects 235. Further, the client system 200 tracks the position, orientation, and configuration of the objects in the physical environment over a sliding window of time. Field of view typically corresponds with the viewing perspective of the extended reality system 205. In some examples, the extended reality application presents extended reality content 225 comprising mixed reality and/or augmented reality.

Various embodiments disclosed herein may include or be implemented in conjunction with various types of extended reality systems. Extended reality content generated by the extended reality systems may include completely computer-generated content or computer-generated content combined with captured (e.g., real-world) content. The extended reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional (3D) effect to the viewer). Additionally, in some embodiments, extended reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, for example, create content in an extended reality and/or are otherwise used in (e.g., to perform activities in) an extended reality.

The extended reality systems may be implemented in a variety of different form factors and configurations. Some extended reality systems may be designed to work without near-eye displays (NEDs). Other extended reality systems may include an NED that also provides visibility into the real world (such as, e.g., augmented reality system 300 in FIG. 3A) or that visually immerses a user in an extended reality (such as, e.g., virtual reality system 350 in FIG. 3B). While some extended reality devices may be self-contained systems, other extended reality devices may communicate and/or coordinate with external devices to provide an extended reality experience to a user. Examples of such external devices include handheld controllers, mobile devices, desktop computers, devices worn by a user, devices worn by one or more other users, and/or any other suitable external system.

As shown in FIG. 3A, augmented reality system 300 may include an eyewear device 305 with a frame 310 configured to hold a left display device 315(A) and a right display device 315(B) in front of a user's eyes. Display devices 315(A) and 315(B) may act together or independently to present an image or series of images to a user. While augmented reality system 300 includes two displays, embodiments of this disclosure may be implemented in augmented reality systems with a single NED or more than two NEDs.

In some embodiments, augmented reality system 300 may include one or more sensors, such as sensor 320. Sensor 320 may generate measurement signals in response to motion of augmented reality system 300 and may be located on substantially any portion of frame 310. Sensor 320 may represent one or more of a variety of different sensing mechanisms, such as a position sensor, an inertial measurement unit (IMU), a depth camera assembly, a structured light emitter and/or detector, or any combination thereof. In some embodiments, augmented reality system 300 may or may not include sensor 320 or may include more than one sensor. In embodiments in which sensor 320 includes an IMU, the IMU may generate calibration data based on measurement signals from sensor 320. Examples of sensor 320 may include, without limitation, accelerometers, gyroscopes, magnetometers, other suitable types of sensors that detect motion, sensors used for error correction of the IMU, or some combination thereof.

In some examples, augmented reality system 300 may also include a microphone array with a plurality of acoustic transducers 325(A)-325(J), referred to collectively as acoustic transducers 325. Acoustic transducers 325 may represent transducers that detect air pressure variations induced by sound waves. Each acoustic transducer 325 may be configured to detect sound and convert the detected sound into an electronic format (e.g., an analog or digital format). The microphone array in FIG. 3A may include, for example, ten acoustic transducers: 325(A) and 325(B), which may be designed to be placed inside a corresponding ear of the user, acoustic transducers 325(C), 325(D), 325(E), 325(F), 325(G), and 325(H), which may be positioned at various locations on frame 310, and/or acoustic transducers 325(I) and 325(J), which may be positioned on a corresponding neckband 330.

In some embodiments, one or more of acoustic transducers 325(A)-(J) may be used as output transducers (e.g., speakers). For example, acoustic transducers 325(A) and/or 325(B) may be earbuds or any other suitable type of headphone or speaker. The configuration of acoustic transducers 325 of the microphone array may vary. While augmented reality system 300 is shown in FIG. 3A as having ten acoustic transducers 325, the number of acoustic transducers 325 may be greater or less than ten. In some embodiments, using higher numbers of acoustic transducers 325 may increase the amount of audio information collected and/or the sensitivity and accuracy of the audio information. In contrast, using a lower number of acoustic transducers 325 may decrease the computing power required by an associated controller 335 to process the collected audio information. In addition, the position of each acoustic transducer 325 of the microphone array may vary. For example, the position of an acoustic transducer 325 may include a defined position on the user, a defined coordinate on frame 310, an orientation associated with each acoustic transducer 325, or some combination thereof.

Acoustic transducers 325(A) and 325(B) may be positioned on different parts of the user's ear, such as behind the pinna, behind the tragus, and/or within the auricle or fossa. Or, there may be additional acoustic transducers 325 on or surrounding the ear in addition to acoustic transducers 325 inside the ear canal. Having an acoustic transducer 325 positioned next to an ear canal of a user may enable the microphone array to collect information on how sounds arrive at the ear canal. By positioning at least two of acoustic transducers 325 on either side of a user's head (e.g., as binaural microphones), augmented reality system 300 may simulate binaural hearing and capture a 3D stereo sound field around about a user's head. In some embodiments, acoustic transducers 325(A) and 325(B) may be connected to augmented reality system 300 via a wired connection 340, and in other embodiments acoustic transducers 325(A) and 325(B) may be connected to augmented reality system 300 via a wireless connection (e.g., a Bluetooth connection). In still other embodiments, acoustic transducers 325(A) and 325(B) may not be used at all in conjunction with augmented reality system 300.

Acoustic transducers 325 on frame 310 may be positioned in a variety of different ways, including along the length of the temples, across the bridge, above or below display devices 315(A) and 315(B), or some combination thereof. Acoustic transducers 325 may also be oriented such that the microphone array is able to detect sounds in a wide range of directions surrounding the user wearing the augmented reality system 300. In some embodiments, an optimization process may be performed during manufacturing of augmented reality system 300 to determine relative positioning of each acoustic transducer 325 in the microphone array.

In some examples, augmented reality system 300 may include or be connected to an external device (e.g., a paired device), such as neckband 330. Neckband 330 generally represents any type or form of paired device. Thus, the following discussion of neckband 330 may also apply to various other paired devices, such as charging cases, smart watches, smart phones, wristbands, other wearable devices, hand-held controllers, tablet computers, laptop computers, other external compute devices, etc.

As shown, neckband 330 may be coupled to eyewear device 305 via one or more connectors. The connectors may be wired or wireless and may include electrical and/or non-electrical (e.g., structural) components. In some cases, eyewear device 305 and neckband 330 may operate independently without any wired or wireless connection between them. While FIG. 3A illustrates the components of eyewear device 305 and neckband 330 in example locations on eyewear device 305 and neckband 330, the components may be located elsewhere and/or distributed differently on eyewear device 305 and/or neckband 330. In some embodiments, the components of eyewear device 305 and neckband 330 may be located on one or more additional peripheral devices paired with eyewear device 305, neckband 330, or some combination thereof.

Neckband 330 may be communicatively coupled with eyewear device 305 and/or to other devices. These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented reality system 300. In the embodiment of FIG. 3A, neckband 330 may include two acoustic transducers (e.g., 325(I) and 325(J)) that are part of the microphone array (or potentially form their own microphone subarray). Neckband 330 may also include a controller 342 and a power source 345.

Acoustic transducers 325(I) and 325(J) of neckband 330 may be configured to detect sound and convert the detected sound into an electronic format (analog or digital). In the embodiment of FIG. 3A, acoustic transducers 325(I) and 325(J) may be positioned on neckband 330, thereby increasing the distance between the neckband acoustic transducers 325(I) and 325(J) and other acoustic transducers 325 positioned on eyewear device 305. In some cases, increasing the distance between acoustic transducers 325 of the microphone array may improve the accuracy of beamforming performed via the microphone array. For example, if a sound is detected by acoustic transducers 325(C) and 325(D) and the distance between acoustic transducers 325(C) and 325(D) is greater than, e.g., the distance between acoustic transducers 325(D) and 325(E), the determined source location of the detected sound may be more accurate than if the sound had been detected by acoustic transducers 325(D) and 325(E).

Power source 345 in neckband 330 may provide power to eyewear device 305 and/or to neckband 330. Power source 345 may include, without limitation, lithium-ion batteries, lithium-polymer batteries, primary lithium batteries, alkaline batteries, or any other form of power storage. In some cases, power source 345 may be a wired power source. Including power source 345 on neckband 330 instead of on eyewear device 305 may help better distribute the weight and heat generated by power source 345.

As noted, some extended reality systems may, instead of blending an extended reality with actual reality, substantially replace one or more of a user's sensory perceptions of the real world with a virtual experience. One example of this type of system is a head-worn display system, such as virtual reality system 350 in FIG. 3B, that mostly or completely covers a user's field of view. Virtual reality system 350 may include a front rigid body 355 and a band 360 shaped to fit around a user's head. Virtual reality system 350 may also include output audio transducers 365(A) and 365(B). Furthermore, while not shown in FIG. 3B, front rigid body 355 may include one or more electronic elements, including one or more electronic displays, one or more inertial measurement units (IMUs), one or more tracking emitters or detectors, and/or any other suitable device or system for creating an extended reality experience.

In addition to or instead of using display screens, some of the extended reality systems described herein may include one or more projection systems. For example, display devices in augmented reality system 300 and/or virtual reality system 350 may include micro-LED projectors that project light (using, e.g., a waveguide) into display devices, such as clear combiner lenses that allow ambient light to pass through. The display devices may refract the projected light toward a user's pupil and may enable a user to simultaneously view both extended reality content and the real world. The display devices may accomplish this using any of a variety of different optical components, including waveguide components (e.g., holographic, planar, diffractive, polarized, and/or reflective waveguide elements), light-manipulation surfaces and elements (such as diffractive, reflective, and refractive elements and gratings), coupling elements, etc. Extended reality systems may also be configured with any other suitable type or form of image projection system, such as retinal projectors used in virtual retina displays.

The extended reality systems described herein may also include various types of computer vision components and subsystems. For example, augmented reality system 300 and/or virtual reality system 350 may include one or more optical sensors, such as two-dimensional (2D) or 3D cameras, structured light transmitters and detectors, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor. An extended reality system may process data from one or more of these sensors to identify a location of a user, to map the real world, to provide a user with context about real-world surroundings, and/or to perform a variety of other functions.

The extended reality systems described herein may also include one or more input and/or output audio transducers. Output audio transducers may include voice coil speakers, ribbon speakers, electrostatic speakers, piezoelectric speakers, bone conduction transducers, cartilage conduction transducers, tragus-vibration transducers, and/or any other suitable type or form of audio transducer. Similarly, input audio transducers may include condenser microphones, dynamic microphones, ribbon microphones, and/or any other type or form of input transducer. In some embodiments, a single transducer may be used for both audio input and audio output.

In some embodiments, the extended reality systems described herein may also include tactile (e.g., haptic) feedback systems, which may be incorporated into headwear, gloves, body suits, handheld controllers, environmental devices (e.g., chairs, floor mats, etc.), and/or any other type of device or system. Haptic feedback systems may provide various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. Haptic feedback systems may also provide various types of kinesthetic feedback, such as motion and compliance. Haptic feedback may be implemented using motors, piezoelectric actuators, fluidic systems, and/or a variety of other types of feedback mechanisms. Haptic feedback systems may be implemented independent of other extended reality devices, within other extended reality devices, and/or in conjunction with other extended reality devices.

By providing haptic sensations, audible content, and/or visual content, extended reality systems may create an entire virtual experience or enhance a user's real-world experience in a variety of contexts and environments. For instance, extended reality systems may assist or extend a user's perception, memory, or cognition within a particular environment. Some systems may enhance a user's interactions with other people in the real world or may enable more immersive interactions with other people in a virtual world. Extended reality systems may also be used for educational purposes (e.g., for teaching or training in schools, hospitals, government organizations, military organizations, business enterprises, etc.), entertainment purposes (e.g., for playing video games, listening to music, watching video content, etc.), and/or for accessibility purposes (e.g., as hearing aids, visual aids, etc.). The embodiments disclosed herein may enable or enhance a user's extended reality experience in one or more of these contexts and environments and/or in other contexts and environments.

As noted, extended reality systems 300 and 350 may be used with a variety of other types of devices to provide a more compelling extended reality experience. These devices may be haptic interfaces with transducers that provide haptic feedback and/or that collect haptic information about a user's interaction with an environment. The extended reality systems disclosed herein may include various types of haptic interfaces that detect or convey various types of haptic information, including tactile feedback (e.g., feedback that a user detects via nerves in the skin, which may also be referred to as cutaneous feedback) and/or kinesthetic feedback (e.g., feedback that a user detects via receptors located in muscles, joints, and/or tendons).

One or more vibrotactile devices 420 may be positioned at least partially within one or more corresponding pockets formed in textile material 415 of vibrotactile system 400. Vibrotactile devices 420 may be positioned in locations to provide a vibrating sensation (e.g., haptic feedback) to a user of vibrotactile system 400. For example, vibrotactile devices 420 may be positioned against the user's finger(s), thumb, or wrist, as shown in FIG. 4A. Vibrotactile devices 420 may, in some examples, be sufficiently flexible to conform to or bend with the user's corresponding body part(s).

A power source 425 (e.g., a battery) for applying a voltage to the vibrotactile devices 420 for activation thereof may be electrically coupled to vibrotactile devices 420, such as via conductive wiring 430. In some examples, each of vibrotactile devices 420 may be independently electrically coupled to power source 425 for individual activation. In some embodiments, a processor 435 may be operatively coupled to power source 425 and configured (e.g., programmed) to control activation of vibrotactile devices 420.

Vibrotactile system 400 may optionally include other subsystems and components, such as touch-sensitive pads 450, pressure sensors, motion sensors, position sensors, lighting elements, and/or user interface elements (e.g., an on/off button, a vibration control element, etc.). During use, vibrotactile devices 420 may be configured to be activated for a variety of different reasons, such as in response to the user's interaction with user interface elements, a signal from the motion or position sensors, a signal from the touch-sensitive pads 450, a signal from the pressure sensors, a signal from the other device or system 440, etc.

Although power source 425, processor 435, and communications interface 445 are illustrated in FIG. 4A as being positioned in haptic device 410, the present disclosure is not so limited. For example, one or more of power source 425, processor 435, or communications interface 445 may be positioned within haptic device 405 or within another wearable textile.

Haptic wearables, such as those shown in and described in connection with FIG. 4A, may be implemented in a variety of types of extended reality systems and environments. FIG. 4B shows an example extended reality environment 460 including one head-mounted virtual reality display and two haptic devices (e.g., gloves), and in other embodiments any number and/or combination of these components and other components may be included in an extended reality system. For example, in some embodiments there may be multiple head-mounted displays each having an associated haptic device, with each head-mounted display and each haptic device communicating with the same console, portable computing device, or other computing system.

While haptic interfaces may be used with virtual reality systems, as shown in FIG. 4B, haptic interfaces may also be used with augmented reality systems, as shown in FIG. 4C. FIG. 4C is a perspective view of a user 475 interacting with an augmented reality system 480. In this example, user 475 may wear a pair of augmented reality glasses 485 that may have one or more displays 487 and that are paired with a haptic device 490. In this example, haptic device 490 may be a wristband that includes a plurality of band elements 492 and a tensioning mechanism 495 that connects band elements 492 to one another.

One or more of band elements 492 may include any type or form of actuator suitable for providing haptic feedback. For example, one or more of band elements 492 may be configured to provide one or more of various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. To provide such feedback, band elements 492 may include one or more of various types of actuators. In one example, each of band elements 492 may include a vibrotactor (e.g., a vibrotactile actuator) configured to vibrate in unison or independently to provide one or more of various types of haptic sensations to a user. Alternatively, only a single band element or a subset of band elements may include vibrotactors.

Haptic devices 405, 410, 470, and 490 may include any suitable number and/or type of haptic transducer, sensor, and/or feedback mechanism. For example, haptic devices 405, 410, 470, and 490 may include one or more mechanical transducers, piezoelectric transducers, and/or fluidic transducers. Haptic devices 405, 410, 470, and 490 may also include various combinations of different types and forms of transducers that work together or independently to enhance a user's extended reality experience. In one example, each of band elements 492 of haptic device 490 may include a vibrotactor (e.g., a vibrotactile actuator) configured to vibrate in unison or independently to provide one or more of various types of haptic sensations to a user.

In some embodiments, the data 525 obtained via the client system 505 is associated with one or more privacy settings. The data 525 may be stored on or otherwise associated with any suitable computing system or application, such as, for example, a social-networking system, a client system, a third-party system, a messaging application, a photo-sharing application, a biometric data acquisition application, an artificial-reality application, a virtual assistant application, and/or any other suitable computing system or application.

In some embodiments, privacy settings for the data 525 may specify a “blocked list” of users or other entities that should not be allowed to access certain information associated with the data 525. In some cases, the blocked list may include third-party entities. The blocked list may specify one or more users or entities for which the data 525 is not visible.

Privacy settings associated with the data 525 may specify any suitable granularity of permitted access or denial of access. As an example, access or denial of access may be specified for particular users (e.g., only me, my roommates, my boss), users within a particular degree-of-separation (e.g., friends, friends-of-friends), user groups (e.g., the gaming club, my family), user networks (e.g., employees of particular employers, students or alumni of particular university), all users (“public”), no users (“private”), users of third-party systems, particular applications (e.g., third-party applications, external websites), other suitable entities, or any suitable combination thereof. In some embodiments, different pieces of the data 525 of the same type associated with a user may have different privacy settings. In addition, one or more default privacy settings may be set for each piece of data 525 of a particular data-type.
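A hedged sketch of how this granularity might be evaluated: a blocked list is checked first, then the audience setting. The PrivacySetting shape and may_access helper below are hypothetical names for illustration, not the platform's actual API.

```python
# Illustrative access check: the blocked list always wins, then the audience
# setting is applied. PrivacySetting and may_access are hypothetical names.
from dataclasses import dataclass, field
from typing import Set

@dataclass
class PrivacySetting:
    audience: str = "private"           # e.g., "public", "friends", "private"
    blocked: Set[str] = field(default_factory=set)

def may_access(setting: PrivacySetting, viewer: str, friends: Set[str]) -> bool:
    if viewer in setting.blocked:       # blocked-list check comes first
        return False
    if setting.audience == "public":
        return True
    if setting.audience == "friends":
        return viewer in friends
    return False                        # "private": no other users

setting = PrivacySetting(audience="friends", blocked={"user_x"})
print(may_access(setting, "user_y", friends={"user_y"}))  # True
print(may_access(setting, "user_x", friends={"user_x"}))  # False (blocked)
```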

Although the social communication platform 500 is described with regard to generating the haptic signal 535 at the client system 505(a) of the sending user, it should be understood that the haptic signal 535 can alternatively be generated at the client system 505(b) of the receiving user or a completely different remote system (e.g., a distributed social networking system) using similar components and techniques described herein. Moreover, the social communication platform 500 illustrates a one-way haptic communication where the sending user sends a haptic signal to the receiving user; however, it should be understood that the haptic communication can be bidirectional, and the client system 505(b) of the receiving user could have similar components as described with respect to the client system 505(a) of the sending user, and likewise the client system 505(a) of the sending user could have similar components as described with respect to the client system 505(b) of the receiving user. Further, a sending user can broadcast the haptic signal via network 540 to a plurality of client systems 505(b-n) associated with receiving users instead of a single receiving user.

Touch Communication Techniques

Touch Communication Using a Lexicon of Emojis

FIG. 6A is a block diagram illustrating components of a social communication system 600 for converting input data 605 to haptic output 610 using a lexicon of emojis 615 in accordance with various embodiments. To generate the haptic output 610, input data 605 from a first user (sending user) is processed by an algorithm using the lexicon of emojis 615 to obtain a corresponding haptic signal that is transmitted to a second user (receiving user) to operate the haptic feedback device. The haptic feedback device receives the transmitted haptic signals, translates the haptic signals into the haptic output 610, and transmits the haptic output 610 corresponding to the received haptic signals to a body of the second user.

In some instances, the lexicon of emojis 615 may be a key-value store, or key-value database, which is a type of data storage software program that stores data as a set of unique identifiers, each of which has an associated value. This data pairing is known as a “key-value pair.” The unique identifier is the “key” for an item of data, and a value is either the data being identified or the location of that data. Although the lexicon of emojis 615 is described herein as a key-value database, it should be understood that other database designs could be used without departing from the spirit and scope of the present disclosure. For example, in other instances, the lexicon of emojis 615 is a relational database, where data is stored in tables composed of rows and columns. The database developer specifies attributes of the data (i.e., emojis and assets thereof) to be stored in the table up front. This creates significant opportunities for optimizations such as data compression and performance around aggregations and data access. The attributes of the data may be queried in a similar fashion as keys in the key-value database to identify emojis associated with such attributes.
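As an illustration of the key-value design just described, the sketch below stores each emoji under a unique key whose value bundles its digital assets. The field names, asset files, and parameter values are assumptions for illustration only.

```python
# Key-value sketch of the lexicon: the key identifies the emoji, the value
# bundles its digital assets. File names and parameters are made up.
LEXICON = {
    "wave": {
        "image": "wave.gif",
        "audio": "wave_hello.wav",
        "haptic": {"interval_ms": 80, "pitch_hz": 200.0, "amplitude": 0.8},
    },
    "haha": {
        "image": "laugh.json",   # e.g., an animation asset
        "audio": "ha_ha_ha.mp3",
        "haptic": {"interval_ms": 120, "pitch_hz": 250.0, "amplitude": 1.0},
    },
}

def lookup(key: str) -> dict:
    """Key-value access: return the digital assets stored under the key."""
    return LEXICON[key]
```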

The lexicon of emojis 615 may comprise any number of emojis 620(A-N). Each of the emojis 620 is configured with a corresponding electronic communication that includes a visual component (shown in FIG. 6B as the character in each illustration), an audio component (shown in FIG. 6B as the verbal utterance in each illustration), a haptic component (shown in FIG. 6C as the haptic signal pattern in each illustration), or a combination thereof. Emojis with a visual component (e.g., a pictogram, logogram, or ideogram) are associated within the lexicon to an image or video asset (e.g., a jpeg, gif, mov, or json file). Emojis with an audio component are associated within the lexicon to an audio asset (e.g., a wav or mp3 file). Emojis with a haptic component are associated within the lexicon to a haptic signal (e.g., parameter information on interval, pitch, amplitude, or a combination thereof for a touch message to be perceived by a receiving user's body), which can be converted into haptic output 610.

The haptic signal for each emoji may be pre-generated. In some instances, the haptic signal is configured with parameter information for interval, pitch, and amplitude to generate patterns for the haptic output 610 that match the image or animation of the emoji and/or the sound effect of the emoji (i.e., the image or audio component supplements the understanding of the haptic component). In other instances, the haptic signal is configured with parameter information determined by a user (e.g., a perceptual scientist) to generate patterns for the haptic output 610 that best communicate the emotion to a user (i.e., the haptic component has a high likelihood of conveying the emotion to a user without the image or audio component). In other instances, the haptic signal is configured with parameter information determined by a user (e.g., a user of the HMD device) to generate patterns for the haptic output 610 that customize touch communication to a user (i.e., the haptic component is customized for conveying the emotion to a user with or without the image or audio component).
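The following is a rough sketch, under assumed waveform math, of how interval, pitch, and amplitude parameters could be rendered into a vibrotactile pulse train; the patent does not disclose a specific formula, so the sinusoidal-pulse model here is purely illustrative.

```python
# Assumed waveform math: render interval/pitch/amplitude parameters into a
# vibrotactile pulse train (sinusoidal pulses separated by silent gaps).
import numpy as np

def haptic_pattern(interval_ms: float, pitch_hz: float, amplitude: float,
                   pulses: int = 3, pulse_ms: float = 100.0,
                   sample_rate: int = 8000) -> np.ndarray:
    pulse_t = np.arange(int(sample_rate * pulse_ms / 1000)) / sample_rate
    pulse = amplitude * np.sin(2 * np.pi * pitch_hz * pulse_t)
    gap = np.zeros(int(sample_rate * interval_ms / 1000))
    return np.concatenate([np.concatenate([pulse, gap])] * pulses)

wave_signal = haptic_pattern(interval_ms=80, pitch_hz=200.0, amplitude=0.8)
```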

A lexicon signal converter 625 converts the input data 605 into haptic signals using the lexicon of emojis 615. The lexicon signal converter 625 may be a component in a signal generator (e.g., signal generator 555 described with respect to FIG. 5). The lexicon signal converter 625 comprises an input data processing module 630, a pattern recognition module 635, and a query engine 640. The lexicon signal converter 625 determines the characteristics of the input data 605 received (e.g., text, audio, images or video, sensor data, or the like) using the input data processing module 630, identifies a key or attributes within the input data 605 using the pattern recognition module 635, and communicates the key or attributes to the query engine 640 for searching the lexicon of emojis 615 to identify one or more emojis associated with an electronic communication.
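A minimal sketch of the three-stage pipeline just described: characterize the input, recognize a key or attributes, and query the lexicon. The regex-based text recognizer and the LEXICON contents are purely illustrative assumptions, not the disclosed modules.

```python
# Minimal pipeline sketch: characterize the input, recognize a key, query
# the lexicon. The regex recognizer and lexicon contents are illustrative.
import re
from typing import Optional

LEXICON = {"wave": {"haptic": {"interval_ms": 80, "pitch_hz": 200.0,
                               "amplitude": 0.8}}}

def characterize(input_data) -> str:
    """Input data processing: determine the modality of the input."""
    return "text" if isinstance(input_data, str) else "other"

def recognize_key(input_data, kind: str) -> Optional[str]:
    """Pattern recognition: find a key/attribute of electronic communication."""
    if kind == "text":
        match = re.search(r"\b(wave|haha|nope)\b", input_data.lower())
        return match.group(1) if match else None
    return None  # audio/image/sensor recognizers omitted in this sketch

def convert(input_data) -> Optional[dict]:
    """Query engine: use the recognized key to retrieve the emoji's assets."""
    key = recognize_key(input_data, characterize(input_data))
    return LEXICON.get(key) if key else None

print(convert("I wave hello!"))  # -> the 'wave' emoji's digital assets
```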

FIG. 7 is a flowchart illustrating a process 700 for converting input data to haptic output using a lexicon of emojis according to various embodiments. The processing depicted in FIG. 7 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 7 and described below is intended to be illustrative and non-limiting. Although FIG. 7 depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in some different order or some steps may also be performed in parallel. In certain embodiments, such as in an embodiment depicted in FIG. 1, 2A, 2B, 3A, 3B, 4A, 4B, 4C, 5, or 6A-6C, the processing depicted in FIG. 7 may be performed by a social communication platform or system that facilitates touch communication with users.

At step 705, input data is obtained from a client system of a first user (e.g., captured using one or more sensors). In some instances, the one or more sensors capture input data including images of a visual field of the first user wearing a head-mounted device comprising a display to display content to the first user. The input data includes: (i) data regarding activity of the user in an extended reality environment (e.g., images and audio of the user interacting in the physical environment and/or the virtual environment), (ii) data from external systems, or (iii) both. In some instances, the data regarding activity of the user includes text, audio, images or video, sensor data, or the like.

At step 710, features are extracted from the input data that correspond to an electronic communication. The extracting comprises determining characteristics of the input data and identifying patterns within the input data that correspond to a key or attributes of electronic communication based on the characteristics. The key or attributes are the extracted features.

At step 715, an emoji (e.g., a haptic emoji) is identified from a lexicon of emojis based on the extracted features. The identifying the emoji comprises constructing a query using the extracted features as parameters of the query and executing the query on the lexicon of emojis.

At step 720, digital assets are obtained for the emoji. The digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output. In some instances, the digital assets further comprise an image or video asset, an audio asset, or both. The haptic signal for the emoji may be pre-generated. In some instances, the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output. In some instances, the haptic signal is configured with parameter information for interval, pitch, and amplitude to generate patterns for the haptic output that match the image or animation of the emoji and/or the sound effect of the emoji. In other instances, the haptic signal is configured with parameter information determined by a user (e.g., the first user or another user) to generate patterns for the haptic output that communicate an emotion via touch communication to the second user.

At step 725, the digital assets are transmitted to a device of a second user. In some instances, the device is another head-mounted device. The device is configured to convert the haptic signal to the haptic output based on the parameter information in order to convey a touch message as at least part of the electronic communication to the second user via a haptic device. In some instances, the haptic output is generated with virtual content (e.g., the image or animation of the emoji and/or the sound effect of the emoji) that is generated and rendered by the client system in the extended reality environment displayed to the user based on the digital assets (e.g., the image or video asset, the audio asset, or both).
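To round out process 700, here is a hedged receive-side sketch: the recipient's device turns the transmitted parameter information into actuator drive commands. The FakeActuator class and its drive() method are hypothetical stand-ins; real wearable hardware interfaces will differ.

```python
# Receive-side sketch: turn the transmitted parameter information into
# actuator drive commands. The drive() API is hypothetical.
import time

class FakeActuator:
    def drive(self, pitch_hz: float, amplitude: float, duration_s: float):
        print(f"vibrate at {pitch_hz} Hz, amplitude {amplitude:.1f}, "
              f"for {duration_s}s")

def play_haptic(actuator, haptic: dict, pulses: int = 3):
    """Replay a received haptic signal as a series of actuator pulses."""
    for _ in range(pulses):
        actuator.drive(haptic["pitch_hz"], haptic["amplitude"], 0.1)
        time.sleep(haptic["interval_ms"] / 1000)

play_haptic(FakeActuator(),
            {"interval_ms": 80, "pitch_hz": 200.0, "amplitude": 0.8})
```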

Touch Communication Using AI-Based System

A prediction model 825 can be a machine-learning model, such as a convolutional neural network ("CNN"), e.g., an inception neural network, a residual neural network ("Resnet"), or a recurrent neural network, e.g., long short-term memory ("LSTM") models or gated recurrent units ("GRUs") models, other variants of Deep Neural Networks ("DNN") (e.g., a multi-label n-binary DNN classifier or multi-class DNN classifier). A prediction model 825 can also be any other suitable ML model trained for providing a recommendation, such as a generative adversarial network (GAN), Naive Bayes Classifier, Linear Classifier, Support Vector Machine, Bagging Models such as Random Forest Model, Boosting Models, Shallow Neural Networks, or combinations of one or more of such techniques, e.g., CNN-HMM or MCNN (Multi-Scale Convolutional Neural Network). The machine-learning prediction system 800 may employ the same type of prediction model or different types of prediction models for predicting haptic emojis for conveying a touch message. Still other types of prediction models may be implemented in other examples according to this disclosure.

To train the various prediction models 825, the training stage 810 comprises two main components: dataset preparation module 830 and model training framework 840. The dataset preparation module 830 performs the processes of loading data assets 845, splitting the data assets 845 into training and validation sets 845a-n so that the system can train and test the prediction models 825, and pre-processing of data assets 845. The splitting of the data assets 845 into training and validation sets 845a-n may be performed randomly (e.g., a 90/10% or 70/30% split) or the splitting may be performed in accordance with a more complex validation technique such as K-Fold Cross-Validation, Leave-one-out Cross-Validation, Leave-one-group-out Cross-Validation, Nested Cross-Validation, or the like to minimize sampling bias and overfitting.
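The two splitting strategies mentioned above might look like the following, using scikit-learn as an assumed tooling choice (the patent names no library) and synthetic stand-in data: a simple random 90/10 split and K-fold cross-validation.

```python
# Two splitting strategies from the paragraph above, sketched with
# scikit-learn (an assumed tool; the patent names no library).
import numpy as np
from sklearn.model_selection import KFold, train_test_split

X = np.random.rand(100, 16)       # stand-in feature vectors for data assets
y = np.random.randint(0, 5, 100)  # stand-in haptic-emoji labels

# Simple random split (here 90/10).
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.1)

# K-fold cross-validation to reduce sampling bias and overfitting.
for train_idx, val_idx in KFold(n_splits=5, shuffle=True).split(X):
    pass  # train on X[train_idx], validate on X[val_idx]
```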

The model training stage 810 outputs trained models including one or more trained prediction models 860. The one or more trained prediction models 860 may be deployed and used in the implementation stage 820 to predict a haptic emoji or haptic signal 865 for conveying a touch message. For example, prediction models 860 may receive input data 870 (e.g., a gesture by a first user) or context data (e.g., a text message received by a second user), and predict a haptic emoji or haptic signal based on features and relationships between features extracted from within the input data 870.
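In the same assumed tooling, a stand-in classifier illustrates the train-then-deploy flow of the training and implementation stages; the RandomForest choice and the synthetic data are illustrative only, not the patent's prediction model.

```python
# Train-then-deploy sketch with synthetic data; the RandomForest choice is
# arbitrary and illustrative, not the patent's prediction model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = np.random.rand(200, 16)       # stand-in gesture/context feature vectors
y = np.random.randint(0, 5, 200)  # stand-in haptic-emoji labels
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.1)

model = RandomForestClassifier().fit(X_train, y_train)  # training stage
print("validation accuracy:", model.score(X_val, y_val))

# Implementation stage: predict a haptic emoji for newly captured features.
predicted_emoji = model.predict(X_val[:1])[0]
```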

FIG. 9 is a flowchart illustrating a process 900 to predict haptic emojis for conveying a touch message according to various embodiments. The processing depicted in FIG. 9 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 9 and described below is intended to be illustrative and non-limiting. Although FIG. 9 depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in some different order or some steps may also be performed in parallel. In certain embodiments, such as in an embodiment depicted in FIG. 1, 2A, 2B, 3A, 3B, 4A, 4B, 4C, 5, or 8, the processing depicted in FIG. 9 may be performed by a social communication platform or system that facilitates touch communication with users.

At step 905, input data is obtained from a client system of a first user (e.g., captured using one or more sensors). In some instances, the one or more sensors capture input data including images of a visual field of the first user wearing a head-mounted device comprising a display to display content to the first user. The input data includes: (i) data regarding activity of the user in an extended reality environment (e.g., images and audio of the user interacting in the physical environment and/or the virtual environment), (ii) data from external systems, or (iii) both. In some instances, the data regarding activity of the user includes text, audio, images or video, sensor data, or the like.

Atstep910,predictingahapticemojiorahapticsignalbasedontheinputdataandmodelparameterslearnedfromhistoricalinputdata(e.g.,agesturebyafirstuser)andcontextdata(e.g.,atextmessagereceivedbyaseconduser).

Atoptionalstep915(instancesofpredictingahapticemoji),digitalassetsareobtainedfortheemoji.Thedigitalassetscompriseahapticsignalconfiguredwithparameterinformationtogeneratepatternsforhapticoutput.Insomeinstances,thedigitalassetsfurthercompriseanimageorvideoasset,anaudioasset,orboth.Thehapticsignalfortheemojimaybepre-generated.Insomeinstances,thehapticsignalisconfiguredwiththeparameterinformationforinterval,pitch,andamplitudetogeneratethepatternsforthehapticoutput.Insomeinstances,thehapticsignalisconfiguredwithparameterinformationforinterval,pitch,andamplitudetogeneratepatternsforthehapticoutputthatmatchtheimageoranimationoftheemojiand/orthesoundeffectoftheemoji.Inotherinstances,thehapticsignalisconfiguredwithparameterinformationdeterminedbyauser(e.g.,thefirstuseroranotheruser)togeneratepatternsforthehapticoutputthatcommunicateanemotionviatouchcommunicationtotheseconduser.

At step 920, the digital assets or haptic signal are transmitted to a device of a second user. In some instances, the device is another head-mounted device. The device is configured to convert the haptic signal to the haptic output based on the parameter information in order to convey a touch message as at least part of the electronic communication to the second user via a haptic device. In some instances, the haptic output is generated with virtual content (e.g., the image or animation of the emoji and/or the sound effect of the emoji) that is generated and rendered by the client system in the extended reality environment displayed to the user based on the digital assets (e.g., the image or video asset, the audio asset, or both).

Learning Program to Facilitate Learning of the Haptic Output

The input data 1005 may be text, audio, images or video, sensor data, or the like. The additional information 1030 may include a text description of the touch communication conveyed by the haptic signal (e.g., for a wave haptic signal, the text could say "sending user" waves hello to "receiving user"), an audio component corresponding to a haptic signal (e.g., a laughing sound corresponding to a HaHaHa haptic signal), an image component corresponding to a haptic signal (e.g., a character giving a thumbs down for a nope haptic signal), or a combination thereof.

In other instances, where the artificial intelligence based system 1020 predicts a haptic emoji or haptic signal, the learning module 1025 takes as input the haptic signal (or corresponding haptic emoji information) and determines, using one or more rules, logic, or machine-learning models, additional information 1030 (e.g., an audio component or an image component) that could be used to supplement the haptic signal. For example, the learning module 1025 may use one or more rules, logic, or machine-learning models to determine a text component, an audio component, and/or an image component that could be used to supplement the haptic signal (or corresponding haptic emoji information), then retrieve the text component, the audio component, and/or the image component from the data storage device 1035 or a secondary data storage device 1040 (e.g., a remote storage device or third-party storage device) and forward it along with the haptic component.
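
A minimal rule-based sketch of this supplementation step follows; the lookup table, emoji identifiers, and asset paths are hypothetical, and a deployed learning module 1025 could equally use logic or a machine-learning model as described above.

```python
# Illustrative rule-based supplementation of a haptic signal with
# additional information 1030; all table entries are hypothetical.
SUPPLEMENT_TABLE = {
    "wave":   {"text": "{sender} waves hello to {receiver}",
               "audio": "assets/wave_chime.wav",
               "image": "assets/wave_emoji.png"},
    "hahaha": {"text": "{sender} is laughing",
               "audio": "assets/laugh.wav",
               "image": "assets/laughing_emoji.png"},
    "nope":   {"text": "{sender} says nope",
               "audio": "assets/buzz.wav",
               "image": "assets/thumbs_down.png"},
}

def supplement_haptic_signal(emoji_id: str, sender: str, receiver: str) -> dict:
    """Return additional information to forward along with the haptic signal."""
    info = dict(SUPPLEMENT_TABLE.get(emoji_id, {}))
    if "text" in info:
        info["text"] = info["text"].format(sender=sender, receiver=receiver)
    return info
```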

The benefits and advantages of this approach are that the receiving user may more easily learn the haptic output patterns and their associated meaning based on associated visual and/or audio context. For example, the learning module 1025 may be configured to transmit the haptic signal along with a visual and/or audio signal to the receiving user such that when the user feels the haptic output 1010 based on the haptic signal and concurrently visualizes the visual signal (e.g., a visual emoji) on a display and/or hears the audio signal, the user learns to associate the haptic output pattern with the associated visual and/or audio context. The visual and/or audio signal may be obtained as part of the additional information 1030 and associated and transmitted with the haptic signal by the learning module 1025. Additionally or alternatively, the visual and/or audio signal may be generated based on the additional information 1030 by the learning module 1025, and associated and transmitted with the haptic signal by the learning module 1025.

FIG. 11 is a flowchart illustrating a process 1100 for supplementing a haptic signal with additional information to facilitate a user learning a haptic output in accordance with various embodiments. The processing depicted in FIG. 11 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 11 and described below is intended to be illustrative and non-limiting. Although FIG. 11 depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in some different order, or some steps may also be performed in parallel. In certain embodiments, such as in an embodiment depicted in FIG. 1, 2A, 2B, 3A, 3B, 4A, 4B, 4C, 5, 6A-6C, 8, or 10, the processing depicted in FIG. 11 may be performed by a social communication platform or system that facilitates touch communication with users.

At step 1105, input data is obtained from a client system of a first user (e.g., captured using one or more sensors). In some instances, the one or more sensors capture input data including images of a visual field of the first user wearing a head-mounted device comprising a display to display content to the first user. The input data includes: (i) data regarding activity of the user in an extended reality environment (e.g., images and audio of the user interacting in the physical environment and/or the virtual environment), (ii) data from external systems, or (iii) both. In some instances, the data regarding activity of the user includes text, audio, images or video, sensor data, or the like.

At step 1110, an emoji (e.g., a haptic emoji) or haptic signal is identified from a lexicon of emojis or by an artificial intelligence based system, as described with respect to FIGS. 6A-6C, 7, 8, and 9.

At step 1115, additional information is obtained based on the emoji or haptic signal. The additional information may include a text description of the touch communication conveyed by the haptic signal (e.g., for a wave haptic signal, the text could say "sending user" waves hello to "receiving user"), an audio component corresponding to a haptic signal (e.g., a laughing sound corresponding to a HaHaHa haptic signal), an image component corresponding to a haptic signal (e.g., a character giving a thumbs down for a nope haptic signal), or a combination thereof.

At step 1120, the haptic signal and additional information are transmitted to a device of a second user. In some instances, the device is another head-mounted device. The device is configured to convert the haptic signal to the haptic output based on the parameter information in order to convey a touch message as at least part of the electronic communication to the second user via a haptic device. The haptic output is generated with virtual content (e.g., the image or animation of the emoji and/or the sound effect of the emoji), which is generated and rendered by the client system in the extended reality environment displayed to the user based on the additional information (e.g., the text, the image or video, the audio, or any combination thereof).

Receiving the Haptic Signal and Generating the Haptic Output

The processor 1215 reads instructions from the memory 1230 and executes them to perform various operations. The processor 1215 may be embodied using any suitable instruction set architecture and may be configured to execute instructions defined in that instruction set architecture. The processor 1215 may be a general-purpose or embedded processor using any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, RISC, ARM, or MIPS ISAs, or any other suitable ISA. Although a single processor is illustrated in FIG. 12, the signal generator 1200 may include multiple processors.

The haptic interface circuit 1220 is a circuit that interfaces with the cutaneous actuators 1205. The haptic interface circuit 1220 generates actuator signals 1210 based on commands from the processor 1215. For this purpose, the haptic interface circuit 1220 may include, for example, a digital-to-analog converter (DAC) for converting digital signals into analog signals. The haptic interface circuit 1220 may also include an amplifier to amplify the analog signals for transmitting the actuator signals 1210 over cables between the signal generator 1200 and the cutaneous actuators 1205. In some embodiments, the haptic interface circuit 1220 communicates with the actuators 1205 wirelessly. In such embodiments, the haptic interface circuit 1220 includes components for modulating wireless signals for transmitting to the actuators 1205 over wireless channels.
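
To make the DAC stage concrete, the sketch below synthesizes a digital sample buffer for a single (interval, pitch, amplitude) pattern segment that a DAC could then convert into an analog actuator signal 1210; the sample rate, waveform shape, and values are illustrative assumptions.

```python
# Illustrative synthesis of digital samples for one haptic pattern segment;
# a DAC in the haptic interface circuit 1220 would convert these samples to
# an analog actuator signal 1210. Rates and values are assumptions.
import math

SAMPLE_RATE_HZ = 8000  # assumed drive sample rate for a cutaneous actuator

def segment_samples(interval_ms: float, pitch_hz: float, amplitude: float):
    """Return a sinusoidal drive waveform for one (interval, pitch, amplitude) segment."""
    n = int(SAMPLE_RATE_HZ * interval_ms / 1000.0)
    return [amplitude * math.sin(2 * math.pi * pitch_hz * i / SAMPLE_RATE_HZ)
            for i in range(n)]

# Example: a 120 ms pulse at 170 Hz with 60% amplitude.
samples = segment_samples(interval_ms=120, pitch_hz=170, amplitude=0.6)
```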

The communication module 1225 (e.g., the receiving device 570 described with respect to FIG. 5) is hardware or combinations of hardware, firmware, and software for communicating with other computing devices. The communication module 1225 may, for example, enable the signal generator 1200 to communicate with a social networking system, a transmitting or sending client system, or an electronic communication source over the network. The communication module 1225 may be embodied as a network card. The memory 1230 is a non-transitory computer readable storage medium for storing software modules. Software modules stored in the memory 1230 may include, among others, applications 1240 and a haptic signal processor 1245 (e.g., the signal processor 547 described with respect to FIG. 5). The memory 1230 may include other software modules not illustrated in FIG. 12, such as an operating system. The applications 1240 may use haptic output via the cutaneous actuators 1205 to perform various functions, such as electronic communication, gaming, and entertainment.

The signal generator 1200 as illustrated in FIG. 12 is merely illustrative, and various modifications may be made to the signal generator 1200. For example, instead of embodying the signal generator 1200 as a software module, the signal generator 1200 may be embodied as a hardware circuit, or a combination of hardware circuits and software modules.

FIG. 13 is a flowchart illustrating a process 1300 for generating a haptic output in accordance with various embodiments. The processing depicted in FIG. 13 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 13 and described below is intended to be illustrative and non-limiting. Although FIG. 13 depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in some different order, or some steps may also be performed in parallel. In certain embodiments, such as in an embodiment depicted in FIG. 1, 2A, 2B, 3A, 3B, 4A, 4B, 4C, 5, 6A-6C, 8, 10, or 12, the processing depicted in FIG. 13 may be performed by a social communication platform or system that facilitates touch communication with users.

At step 1315, the one or more actuator signals are generated based on the parameters determined for the one or more actuator signals. The generating of the one or more actuator signals may include performing digital-to-analog conversion of the haptic signal and/or the one or more actuator signals.

At step 1320, the one or more actuator signals are transmitted to one or more corresponding cutaneous actuators.

At step 1325, the one or more cutaneous actuators generate haptic output in accordance with the corresponding one or more actuator signals, which cause one or more of various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature on the second user's body. In some instances, the haptic output is generated with virtual content (e.g., the image or animation of the emoji and/or the sound effect of the emoji), which is generated and rendered by the client system in the extended reality environment displayed to the user based on the additional information (e.g., the text, the image or video, the audio, or any combination thereof).
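
The sketch below walks these steps end to end: the generated per-actuator signals (step 1315) are transmitted (step 1320) to stand-in cutaneous actuators that would render the haptic output (step 1325); the transport callback and toy sample data are hypothetical.

```python
# Illustrative end-to-end pass over steps 1315-1325: each generated actuator
# signal is sent to its cutaneous actuator for rendering. The transport
# callback below is a hypothetical stand-in for real actuator hardware.
from typing import Callable, Dict, List

def drive_actuators(
    actuator_signals: Dict[str, List[float]],
    transmit: Callable[[str, List[float]], None],
) -> None:
    """Steps 1320-1325: send each generated signal to its cutaneous actuator."""
    for actuator_id, samples in actuator_signals.items():
        transmit(actuator_id, samples)  # e.g., over cables or a wireless channel

# Usage with a stub transport that just logs what would be rendered.
signals = {"wrist_left": [0.0, 0.3, 0.6, 0.3, 0.0]}  # step 1315 output (toy data)
drive_actuators(signals, lambda aid, s: print(f"{aid}: {len(s)} samples"))
```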

ADDITIONAL CONSIDERATIONS

Although specific examples have been described, various modifications, alterations, alternative constructions, and equivalents are possible. Examples are not restricted to operation within certain specific data processing environments, but are free to operate within a plurality of data processing environments. Additionally, although certain examples have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that this is not intended to be limiting. Although some flowcharts describe operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure. Various features and aspects of the above-described examples may be used individually or jointly.

Further, while certain examples have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also possible. Certain examples may be implemented only in hardware, or only in software, or using combinations thereof. The various processes described herein may be implemented on the same processor or different processors in any combination.

Where devices, systems, components, or modules are described as being configured to perform certain operations or functions, such configuration may be accomplished, for example, by designing electronic circuits to perform the operation, by programming programmable electronic circuits (such as microprocessors) to perform the operation (such as by executing computer instructions or code), by processors or cores programmed to execute code or instructions stored on a non-transitory memory medium, or any combination thereof. Processes may communicate using a variety of techniques, including but not limited to conventional techniques for inter-process communications, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times.

Specific details are given in this disclosure to provide a thorough understanding of the examples. However, examples may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the examples. This description provides examples only, and is not intended to limit the scope, applicability, or configuration of other examples. Rather, the preceding description of the examples will provide those skilled in the art with an enabling description for implementing various examples. Various changes may be made in the function and arrangement of elements.

The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope as set forth in the claims. Thus, although specific examples have been described, these are not intended to be limiting. Various modifications and equivalents are within the scope of the following claims.

In the foregoing description, for the purposes of illustration, methods were described in a particular order. It should be appreciated that in alternate examples, the methods may be performed in a different order than that described. It should also be appreciated that the methods described above may be performed by hardware components or may be embodied in sequences of machine-executable instructions, which may be used to cause a machine, such as a general-purpose or special-purpose processor or logic circuits programmed with the instructions, to perform the methods. These machine-executable instructions may be stored on one or more machine-readable mediums, such as CD-ROMs or other types of optical disks, floppy diskettes, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, flash memory, or other types of machine-readable mediums suitable for storing electronic instructions. Alternatively, the methods may be performed by a combination of hardware and software.

Where components are described as being configured to perform certain operations, such configuration may be accomplished, for example, by designing electronic circuits or other hardware to perform the operation, by programming programmable electronic circuits (e.g., microprocessors or other suitable electronic circuits) to perform the operation, or any combination thereof.

While illustrative examples of the application have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art.
